1 Introduction

Special functions, mathematical physics, and orthogonal polynomials are closely related [1, 6, 7, 20, 23, 24]. Special matrix functions appear in connection with statistics, mathematical physics, theoretical physics, Lie group theory, group representation theory, number theory and orthogonal matrix polynomials [9, 19]. Hermite and Laguerre matrix polynomials were introduced and studied in [2, 3, 5, 15, 16, 18, 21, 25–31]. Important connections between orthogonal matrix polynomials and second-order matrix differential equations appear in [4, 10, 11, 22]. Jódar and Cortés [12] introduced and studied the hypergeometric matrix function and the hypergeometric matrix differential equation, and the explicit closed-form general solution of the latter was given in [13]. The interest in this family of hypergeometric matrix functions is due to their intrinsic mathematical importance.

The primary goal of this paper is to consider a system of matrix polynomials, namely the Chebyshev matrix polynomials. The paper is organized as follows. In Sect. 2, a definition of the Chebyshev matrix polynomials is given. Several differential recurrence relations, and in particular Chebyshev’s matrix differential equation, are established in Sect. 3. In Sect. 4, the Chebyshev matrix polynomials are expanded in series of Hermite matrix polynomials, which establishes a connection between the two families. Finally, in Sect. 5, we obtain the Christoffel formula of summation.

Throughout this paper, for a matrix \(A\) in \(\mathbb {C}^{N \times N}\), its spectrum \(\sigma (A)\) denotes the set of all eigenvalues of \(A\). If \(A\) is a matrix in \(\mathbb {C}^{N\times N}\), its two-norm, denoted by \(||A||_2\), is defined by

$$\begin{aligned} ||A||_2=\sup _{x\ne 0}\frac{||Ax||_2}{||x||_2} \end{aligned}$$

where for a vector \(y\) in \(\mathbb {C}^{N}\), \(||y||_2\) denotes the usual Euclidean norm of \(y\), \(||y||_2=(y^*y)^\frac{1}{2}\). \(I\) and \(O\) will denote the identity matrix and the null matrix in \(\mathbb {C}^{N\times N}\), respectively.

If \(f(z)\) and \(g(z)\) are holomorphic functions of the complex variable \(z\), which are defined in an open set \(\Omega \) of the complex plane, and if \(A\) is a matrix in \(\mathbb {C}^{N\times N}\) such that \(\sigma (A)\subset \Omega \), then from the properties of the matrix functional calculus [3, 15], it follows that

$$\begin{aligned} f(A)g(A)= g(A)f(A). \end{aligned}$$
(1.1)

If \(A\) is a matrix with \(\sigma (A)\subset D_{0}\), where \(D_{0}\) denotes the complex plane cut along the negative real axis, then \(A^{\frac{1}{2}}=\sqrt{A}=\exp (\frac{1}{2}\log (A))\) denotes the image, by the matrix functional calculus acting on the matrix \(A\), of the function \(z^\frac{1}{2}=\sqrt{z}=\exp (\frac{1}{2}\log (z))\). We say that \(A\) is a positive stable matrix [5, 14, 15] if

$$\begin{aligned} \mathrm{Re}(z)>0,\quad \forall \ z \in \sigma (A). \end{aligned}$$
(1.2)

Throughout this study, we consider the space \(\mathbb {C}^{N\times N}\) of all square complex matrices of order \(N\). If \(A_{0}\), \(A_{1}\), ..., \(A_{n}\) are elements of \(\mathbb {C}^{N\times N}\) and \(A_{n}\ne O\), then we call

$$\begin{aligned} P_{n}(x)=A_{n}x^{n}+A_{n-1}x^{n-1}+\cdots +A_{1}x+A_{0} \end{aligned}$$
(1.3)

a matrix polynomial of degree \(n\) in \(x\). Following [20], the Pochhammer symbol is defined by

$$\begin{aligned} (p)_{n}=p(p+1)(p+2)(p+3)\cdots (p+n-1);\quad n\ge 1;\quad (p)_0=1. \end{aligned}$$
(1.4)

From (1.4), it is easy to find that

$$\begin{aligned} (p)_{n-k}=\frac{(-1)^k(p)_n}{(1-p-n)_k};\quad 0\le k\le n. \end{aligned}$$
(1.5)

Setting \(p=1\) in (1.5) (see also [1, 20]), one obtains

$$\begin{aligned} \frac{(-1)^k}{(n-k)!}=\frac{(-n)_k}{n!};\quad 0\le k\le n. \end{aligned}$$
(1.6)

The hypergeometric function \(F(a,b;c;z)\) is given by [1, 20]

$$\begin{aligned} F(a,b;c;z)=\sum _{k=0}^{\infty }\frac{(a)_k(b)_k}{k!(c)_k}z^k. \end{aligned}$$
(1.7)
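As a quick numerical sanity check (not part of the original development), the following Python sketch spot-checks the scalar identities (1.5)–(1.7) for a few parameter values; the helper poch and the sample parameters are illustrative choices, and scipy.special.hyp2f1 is used only as an independent reference for (1.7).

```python
# Minimal sketch (not from the paper): spot-check (1.5)-(1.7) numerically.
from math import factorial, isclose
from scipy.special import hyp2f1

def poch(p, m):
    """Pochhammer symbol (p)_m as in (1.4), for integer m >= 0."""
    out = 1.0
    for j in range(m):
        out *= p + j
    return out

p, n = 2.5, 6                                   # illustrative values
for k in range(n + 1):
    # (1.5): (p)_{n-k} = (-1)^k (p)_n / (1 - p - n)_k
    assert isclose(poch(p, n - k), (-1) ** k * poch(p, n) / poch(1 - p - n, k))
    # (1.6): (-1)^k / (n-k)! = (-n)_k / n!
    assert isclose((-1) ** k / factorial(n - k), poch(-n, k) / factorial(n))

# (1.7): partial sums of the Gauss series against an independent reference
a, b, c, z = 1.2, 0.7, 2.3, 0.4
series = sum(poch(a, k) * poch(b, k) / (factorial(k) * poch(c, k)) * z ** k
             for k in range(60))
assert isclose(series, hyp2f1(a, b, c, z), rel_tol=1e-9)
print("identities (1.5)-(1.7) verified numerically")
```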

We will exploit the following Neumann series expansion:

$$\begin{aligned} (I-x\sqrt{2A})^{-1}=\sum _{n=0}^{\infty }(x\sqrt{2A})^{n};\quad \Vert x\sqrt{2A}\Vert _2<1,\quad (x\sqrt{2A})^{0}=I. \end{aligned}$$
(1.8)

It has been shown by Defez and Jódar [3] that, if \(A(k,n)\) and \(B(k,n)\) are matrices in \(\mathbb {C}^{N\times N}\) for \(n\ge 0\), \(k\ge 0\), then the following relations are satisfied:

$$\begin{aligned} \sum _{n=0}^{\infty }\sum _{k=0}^{\infty }A(k,n)=\sum _{n=0}^{\infty }\sum _{k=0}^{[\frac{1}{2}n]}A(k,n-2k) \end{aligned}$$
(1.9)

and

$$\begin{aligned} \sum _{n=0}^{\infty }\sum _{k=0}^{\infty }B(k,n)=\sum _{n=0}^{\infty }\sum _{k=0}^{n}B(k,n-k). \end{aligned}$$
(1.10)

Similarly, we can write

$$\begin{aligned}&\sum _{n=0}^{\infty }\sum _{k=0}^{[\frac{1}{2}n]}A(k,n)=\sum _{n=0}^{\infty }\sum _{k=0}^{\infty }A(k,n+2k),\end{aligned}$$
(1.11)
$$\begin{aligned}&\sum _{n=0}^{\infty }\sum _{k=0}^{n}A(k,n)=\sum _{n=0}^{\infty }\sum _{k=0}^{[\frac{1}{2}n]}A(k,n-k) \end{aligned}$$
(1.12)

and

$$\begin{aligned} \sum _{n=0}^{\infty }\sum _{k=0}^{n}B(k,n)=\sum _{n=0}^{\infty }\sum _{k=0}^{\infty }B(k,n+k). \end{aligned}$$
(1.13)
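Since the rearrangement lemmas (1.9)–(1.13) are purely combinatorial index shifts, they can be spot-checked on any finitely supported array. The sketch below is an illustration only, with an arbitrary random array standing in for \(A(k,n)\); it verifies (1.9), (1.10) and (1.12).

```python
# Minimal sketch: the double-series rearrangements (1.9), (1.10) and (1.12)
# checked on a finitely supported array (A(k, n) = 0 outside a small block),
# so that every series reduces to a finite sum.  The random array is
# illustrative only.
import numpy as np

rng = np.random.default_rng(0)
K, N = 6, 8
data = rng.standard_normal((K, N))

def a(k, n):                                    # A(k, n) with finite support
    return data[k, n] if 0 <= k < K and 0 <= n < N else 0.0

big = 4 * (K + N)                               # exhausts the support
full = sum(a(k, n) for n in range(big) for k in range(big))

rhs_1_9 = sum(a(k, n - 2 * k) for n in range(big) for k in range(n // 2 + 1))
rhs_1_10 = sum(a(k, n - k) for n in range(big) for k in range(n + 1))
assert abs(full - rhs_1_9) < 1e-10 and abs(full - rhs_1_10) < 1e-10

lhs_1_12 = sum(a(k, n) for n in range(big) for k in range(n + 1))
rhs_1_12 = sum(a(k, n - k) for n in range(big) for k in range(n // 2 + 1))
assert abs(lhs_1_12 - rhs_1_12) < 1e-10
print("rearrangements (1.9), (1.10), (1.12) verified on a finite array")
```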

If \(A\) is a positive stable matrix in \(\mathbb {C}^{N\times N}\), then the \(n\)th Hermite matrix polynomial is defined by [10]

$$\begin{aligned} H_{n}(x,A)=n!\sum _{k=0}^{[\frac{1}{2}n]}\frac{(-1)^k(x\sqrt{2A})^{n-2k}}{k!(n-2k)!};\quad n\ge 0. \end{aligned}$$
(1.14)

For the sake of clarity, we recall that if \(A\) is a matrix in \(\mathbb {C}^{N\times N}\) satisfying the condition (1.2), then the expansion of \(x^nI\) in a series of Hermite matrix polynomials has been given in [3, 10] in the form

$$\begin{aligned} x^nI=(\sqrt{2A})^{-n}\sum _{k=0}^{[\frac{1}{2}n]}\frac{n!}{k!(n-2k)!}H_{n-2k}(x,A);\quad -\infty < x< \infty . \end{aligned}$$
(1.15)
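A minimal numerical sketch of (1.14) and (1.15) follows; the \(2\times 2\) matrix \(A\) below is an arbitrary positive stable example chosen only for illustration, and scipy.linalg.sqrtm is assumed to supply the principal square root of \(2A\).

```python
# Minimal numerical sketch of (1.14)-(1.15).  The matrix A below is an
# arbitrary positive stable example (eigenvalues 2 and 3) used only for
# illustration; sqrtm supplies the principal square root of 2A.
import numpy as np
from math import factorial
from scipy.linalg import sqrtm

A = np.array([[2.0, 1.0], [0.0, 3.0]])
a = sqrtm(2 * A)                                 # a = sqrt(2A)
I = np.eye(2)

def hermite_H(n, x):
    """Hermite matrix polynomial H_n(x, A), as in (1.14)."""
    return factorial(n) * sum(
        (-1) ** k * np.linalg.matrix_power(x * a, n - 2 * k)
        / (factorial(k) * factorial(n - 2 * k))
        for k in range(n // 2 + 1))

n, x = 5, 0.7
expansion = np.linalg.matrix_power(np.linalg.inv(a), n) @ sum(
    factorial(n) / (factorial(k) * factorial(n - 2 * k)) * hermite_H(n - 2 * k, x)
    for k in range(n // 2 + 1))
assert np.allclose(x ** n * I, expansion)        # (1.15)
print("expansion (1.15) of x^n I verified for n =", n)
```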

2 Definition of Chebyshev matrix polynomials

Let \(A\) be a positive stable matrix in \(\mathbb {C}^{N\times N}\) satisfying the condition (1.2). We define the Chebyshev matrix polynomials of the second kind by means of the relation

$$\begin{aligned} F(x,t,A)=(I-xt\sqrt{2A}+t^2I)^{-1}=\sum _{n=0}^{\infty }U_{n}(x,A)t^n;\quad |t|<1 ,\quad |x|\le 1 \end{aligned}$$
(2.1)

where \(I-xt\sqrt{2A}+t^2I\) is an invertible matrix, which holds in particular when \(\Vert xt\sqrt{2A}-t^2I\Vert _2<1\). Using (1.7) and (1.11), we have

$$\begin{aligned} (I-xt\sqrt{2A}+t^2I)^{-1}=\sum _{n=0}^{\infty }\sum _{k=0}^{\left[ \frac{1}{2}n\right] }\frac{(-1)^k(1)_{n-k}(x\sqrt{2A})^{n-2k}}{k!(n-2k)!}t^n. \end{aligned}$$
(2.2)

By equating the coefficients of \(t^n\) in (2.1) and (2.2), we obtain an explicit representation of the Chebyshev matrix polynomials of the second kind in the form

$$\begin{aligned} U_{n}(x,A)=\sum _{k=0}^{\left[ \frac{1}{2}n\right] }\frac{(-1)^{k}(n-k)!(x\sqrt{2A})^{n-2k}}{k!(n-2k)!}. \end{aligned}$$
(2.3)
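The explicit representation (2.3) can be checked against the generating relation (2.1) numerically. The sketch below is an illustration only: the matrix \(A\) is the same arbitrary positive stable example used earlier, and the point \((x,t)\) is chosen so that the series converges quickly.

```python
# Minimal numerical sketch: the explicit representation (2.3) against the
# generating relation (2.1).  A, x and t are illustrative choices with
# |t| < 1 and I - xt*sqrt(2A) + t^2 I invertible.
import numpy as np
from math import factorial
from scipy.linalg import sqrtm

A = np.array([[2.0, 1.0], [0.0, 3.0]])
a = sqrtm(2 * A)
I = np.eye(2)

def chebyshev_U(n, x):
    """U_n(x, A) from the explicit representation (2.3)."""
    return sum((-1) ** k * factorial(n - k)
               * np.linalg.matrix_power(x * a, n - 2 * k)
               / (factorial(k) * factorial(n - 2 * k))
               for k in range(n // 2 + 1))

x, t, terms = 0.15, 0.1, 40
closed_form = np.linalg.inv(I - x * t * a + t ** 2 * I)          # (2.1)
partial_sum = sum(chebyshev_U(n, x) * t ** n for n in range(terms))
assert np.allclose(closed_form, partial_sum)
print("generating relation (2.1) agrees with (2.3) at x =", x, "t =", t)
```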

Clearly, \(U_{n}(x,A)\) is a matrix polynomial of degree \(n\) in \(x\). Replacing \(x\) by \(-x\) and \(t\) by \(-t\) in (2.1) leaves the left-hand side unchanged. Therefore

$$\begin{aligned} U_{n}(-x,A)=(-1)^{n}U_{n}(x,A). \end{aligned}$$
(2.4)

For \(x=0\), it follows that

$$\begin{aligned} (I+t^2I)^{-1}=\sum _{n=0}^{\infty }t^{n}U_{n}(0,A). \end{aligned}$$

Also, by (1.7) one gets

$$\begin{aligned} (I+t^2I)^{-1}=\sum _{n=0}^{\infty }(-1)^nt^{2n}I. \end{aligned}$$

Therefore, we have

$$\begin{aligned} U_{2n}(0,A)=(-1)^nI,\quad U_{2n+1}(0,A)=O. \end{aligned}$$
(2.5)

The explicit representation (2.3) gives

$$\begin{aligned} U_{n}(x,A)=(x\sqrt{2A})^n+\Pi _{n-2}(x) \end{aligned}$$

where \(\Pi _{n-2}(x)\) is a matrix polynomial of degree \(n-2\) in \(x\). Consequently, if \(D=\frac{d}{dx}\), it follows that

$$\begin{aligned} D^nU_{n}(x,A)=n!(\sqrt{2A})^n. \end{aligned}$$

3 Differential recurrence relations

In this section, the three-term recurrence relation and several differential recurrence relations are derived for the Chebyshev matrix polynomials.

Differentiating (2.1) with respect to \(x\) and \(t\), respectively, gives

$$\begin{aligned} \frac{\partial {F}}{\partial {x}}=\frac{t\sqrt{2A}}{I-xt\sqrt{2A}+t^2I}F \end{aligned}$$
(3.1)

and

$$\begin{aligned} \frac{\partial {F}}{\partial {t}}=\frac{x\sqrt{2A}-2tI}{I-xt\sqrt{2A}+t^2I}F. \end{aligned}$$
(3.2)

Hence the matrix function \(F\) satisfies the partial matrix differential equation

$$\begin{aligned} (x\sqrt{2A}-2tI)\frac{\partial {F}}{\partial {x}}-t\sqrt{2A}\frac{\partial {F}}{\partial {t}}=0. \end{aligned}$$

Therefore, by (2.1) we get

$$\begin{aligned} \sum _{n=0}^{\infty }x\sqrt{2A}DU_{n}(x,A)t^n-\sum _{n=0}^{\infty }n\sqrt{2A}U_{n}(x,A)t^n=\sum _{n=1}^{\infty }2DU_{n-1}(x,A)t^{n}. \end{aligned}$$

Since \(DU_{0}(x,A)=O\), for \(n\ge 1\) we obtain the differential recurrence relation

$$\begin{aligned} x\sqrt{2A}DU_{n}(x,A)-n\sqrt{2A}U_{n}(x,A)=2DU_{n-1}(x,A). \end{aligned}$$
(3.3)

From (3.1) and (3.2) with the aid of (2.1), we get

$$\begin{aligned} (I-xt\sqrt{2A}+t^2I)^{-2}=\sum _{n=1}^{\infty }(\sqrt{2A})^{-1}DU_{n}(x,A)t^{n-1} \end{aligned}$$
(3.4)

and

$$\begin{aligned} (x\sqrt{2A}-2tI)(I-xt\sqrt{2A}+t^2I)^{-2}=\sum _{n=1}^{\infty }nU_{n}(x,A)t^{n-1}. \end{aligned}$$
(3.5)

Note that \(I-t^2I-t(x\sqrt{2A}-2tI)=I-xt\sqrt{2A}+t^2I\). Thus by multiplying (3.4) by \(I-t^2I\) and (3.5) by \(t\) and subtracting (3.5) from (3.4), we obtain

$$\begin{aligned} (n+1)\sqrt{2A}U_{n}(x,A)=DU_{n+1}(x,A)-DU_{n-1}(x,A). \end{aligned}$$
(3.6)

From (3.3) and (3.6), one gets

$$\begin{aligned} x\sqrt{2A}DU_{n}(x,A)=2DU_{n+1}(x,A)-(n+2)\sqrt{2A}U_{n}(x,A). \end{aligned}$$
(3.7)

Substituting \(n-1\) for \(n\) in (3.7) and eliminating \(DU_{n-1}(x,A)\) by means of (3.3) gives

$$\begin{aligned} ((x\sqrt{2A})^2-4I)DU_{n}(x,A)=nx(\sqrt{2A})^2U_{n}(x,A)-2(n+1)\sqrt{2A}U_{n-1}(x,A). \end{aligned}$$
(3.8)

Now, multiplying (3.3) by \(((x\sqrt{2A})^2-4I)\) and substituting for \(((x\sqrt{2A})^2-4I)DU_{n}(x,A)\) and \(((x\sqrt{2A})^2-4I)DU_{n-1}(x,A)\) from (3.8), we obtain the three-term recurrence relation

$$\begin{aligned} U_{n}(x,A)=x\sqrt{2A}U_{n-1}(x,A)-U_{n-2}(x,A). \end{aligned}$$
(3.9)

Substituting for \(n\) in (3.6) the values \(n\), \(n-1\), \(\ldots \), \(2\), \(1\) and adding, we obtain (using \(U_{0}(x,A)=I\), \(U'_{0}(x,A)=O\), \(U'_{1}(x,A)=\sqrt{2A}\))

$$\begin{aligned} DU_{n+1}(x,A)+DU_{n}(x,A)&= \sqrt{2A}\Big [U_{0}(x,A)+2U_{1}(x,A)+3U_{2}(x,A)\nonumber \\&+\cdots +(n+1)U_{n}(x,A)\Big ]=\sqrt{2A}\sum _{i=0}^{n}(i+1)U_{i}(x,A).\qquad \quad \end{aligned}$$
(3.10)

Formulas (3.3), (3.6), (3.7), (3.8), (3.9) and (3.10) are called the recurrence formulas for the Chebyshev matrix polynomials. The first few Chebyshev matrix polynomials are listed below:

$$\begin{aligned} U_{0}(x,A)&= I,\\ U_{1}(x,A)&= x\sqrt{2A},\\ U_{2}(x,A)&= (x\sqrt{2A})^2-I,\\ U_{3}(x,A)&= (x\sqrt{2A})^3-2x\sqrt{2A},\\ U_{4}(x,A)&= (x\sqrt{2A})^4-3(x\sqrt{2A})^2+I. \end{aligned}$$
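These low-order polynomials, and the agreement between the three-term recurrence (3.9) and the explicit representation (2.3), can be confirmed numerically; the sketch below uses the same illustrative matrix \(A\) and a sample point \(x\), neither of which is prescribed by the paper.

```python
# Minimal sketch: generate U_0, ..., U_6 by the three-term recurrence (3.9)
# and compare with the explicit representation (2.3) and with the low-order
# polynomials listed above.  A and x are illustrative choices.
import numpy as np
from math import factorial
from scipy.linalg import sqrtm

A = np.array([[2.0, 1.0], [0.0, 3.0]])
a = sqrtm(2 * A)
I = np.eye(2)
x = 0.8
B = x * a                                        # B = x * sqrt(2A)

def chebyshev_U(n):
    return sum((-1) ** k * factorial(n - k) * np.linalg.matrix_power(B, n - 2 * k)
               / (factorial(k) * factorial(n - 2 * k)) for k in range(n // 2 + 1))

U = [I, B]                                       # U_0, U_1
for n in range(2, 7):
    U.append(B @ U[n - 1] - U[n - 2])            # (3.9)

for n in range(7):
    assert np.allclose(U[n], chebyshev_U(n))
assert np.allclose(U[2], B @ B - I)
assert np.allclose(U[3], np.linalg.matrix_power(B, 3) - 2 * B)
assert np.allclose(U[4], np.linalg.matrix_power(B, 4) - 3 * (B @ B) + I)
print("recurrence (3.9) agrees with (2.3) for n = 0, ..., 6")
```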

We conclude this section by deriving Chebyshev's matrix differential equation as follows.

In (3.7), replace \(n\) by \(n-1\) and differentiate with respect to \(x\) to find

$$\begin{aligned} x\sqrt{2A}D^2U_{n-1}(x,A)=2D^2U_{n}(x,A)-(n+2)\sqrt{2A}DU_{n-1}(x,A). \end{aligned}$$
(3.11)

Also, by differentiating (3.3) with respect to \(x\), we have

$$\begin{aligned} x\sqrt{2A}D^2U_{n}(x,A)-(n-1)\sqrt{2A}DU_{n}(x,A)=2D^2U_{n-1}(x,A). \end{aligned}$$
(3.12)

Substituting the expressions for \(DU_{n-1}(x,A)\) from (3.3) and \(D^2U_{n-1}(x,A)\) from (3.12) into (3.11) and rearranging terms, we obtain Chebyshev's matrix differential equation in the form

$$\begin{aligned} (4I-(x\sqrt{2A})^2)D^2U_{n}(x,A)-3x(\sqrt{2A})^2DU_{n}(x,A)+n(n+2)(\sqrt{2A})^2U_{n}(x,A)=0. \end{aligned}$$
(3.13)
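Equation (3.13) can be verified as a polynomial identity in \(x\) by storing each \(U_n(x,A)\) as a list of matrix coefficients. The following sketch does so for \(n=0,\ldots ,7\) with an illustrative choice of \(A\); the helper functions are ad hoc and not part of the paper.

```python
# Minimal sketch: verify (3.13) as a polynomial identity in x.  Each
# U_n(x, A) is stored as a list of matrix coefficients [C_0, C_1, ...] with
# U_n(x, A) = sum_j C_j x^j, built from the recurrence (3.9).  The matrix A
# and the helper functions are illustrative, not part of the paper.
import numpy as np
from scipy.linalg import sqrtm

A = np.array([[2.0, 1.0], [0.0, 3.0]])
a = sqrtm(2 * A)
a2 = a @ a                                       # (sqrt(2A))^2 = 2A
I, Z = np.eye(2), np.zeros((2, 2))

def padd(p, q):                                  # add coefficient lists
    m = max(len(p), len(q))
    p, q = p + [Z] * (m - len(p)), q + [Z] * (m - len(q))
    return [ci + di for ci, di in zip(p, q)]

def xshift(p, M):                                # p(x)  ->  x * M * p(x)
    return [Z] + [M @ c for c in p]

def deriv(p):                                    # d/dx of a coefficient list
    return [j * c for j, c in enumerate(p)][1:] or [Z]

U = [[I], [Z, a]]                                # U_0 and U_1
for n in range(2, 8):
    U.append(padd(xshift(U[n - 1], a), [-c for c in U[n - 2]]))      # (3.9)

for n in range(8):
    d1, d2 = deriv(U[n]), deriv(deriv(U[n]))
    lhs = padd([4 * c for c in d2],                                  # 4I D^2 U_n
               [-a2 @ c for c in xshift(xshift(d2, I), I)])          # -(x sqrt(2A))^2 D^2 U_n
    lhs = padd(lhs, [-3 * (a2 @ c) for c in xshift(d1, I)])          # -3x (sqrt(2A))^2 D U_n
    lhs = padd(lhs, [n * (n + 2) * (a2 @ c) for c in U[n]])          # +n(n+2)(sqrt(2A))^2 U_n
    assert all(np.allclose(c, Z) for c in lhs)                       # (3.13)
print("matrix differential equation (3.13) holds for n = 0, ..., 7")
```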

In the following section, we expand the Chebyshev matrix polynomials in series of Hermite matrix polynomials.

4 Expansion of Chebyshev matrix polynomials in series of Hermite matrix polynomials

Employing (2.3) and (1.15) with the aid of (1.11), we consider the series

$$\begin{aligned} \sum _{n=0}^{\infty }U_{n}(x,A)t^n&= \sum _{n=0}^{\infty }\sum _{k=0}^{\left[ \frac{1}{2}n\right] }\frac{(-1)^{k}(n-k)!(x\sqrt{2A})^{n-2k}}{k!(n-2k)!}t^n\nonumber \\&= \sum _{n=0}^{\infty }\sum _{k=0}^{\infty }\frac{(-1)^{k}(n+k)!(x\sqrt{2A})^{n}}{k!n!}t^{n+2k}\nonumber \\&= \sum _{n=0}^{\infty }\sum _{k=0}^{\infty }\sum _{s=0}^{\left[ \frac{1}{2}n\right] }\frac{(-1)^{k}(n+k)!}{k!s!(n-2s)!}H_{n-2s}(x,A)t^{n+2k}. \end{aligned}$$
(4.1)

Since \((n+k)!=(1)_{n+k}\), we can write (4.1) in the form

$$\begin{aligned} \sum _{n=0}^{\infty }U_{n}(x,A)t^n=\sum _{n=0}^{\infty }\sum _{k=0}^{\infty }\sum _{s=0}^{\left[ \frac{1}{2}n\right] }\frac{(-1)^{k}(1)_{n+k}}{k!s!(n-2s)!}H_{n-2s}(x,A)t^{n+2k}. \end{aligned}$$
(4.2)

By using (1.11) the expression (4.2) becomes

$$\begin{aligned} \sum _{n=0}^{\infty }U_{n}(x,A)t^n=\sum _{n=0}^{\infty }\sum _{k=0}^{\infty }\sum _{s=0}^{\infty }\frac{(-1)^{k}(1)_{n+k+2s}}{k!s!n!}H_{n}(x,A)t^{n+2k+2s} \end{aligned}$$

which, by using (1.10), yields

$$\begin{aligned} \sum _{n=0}^{\infty }U_{n}(x,A)t^n=\sum _{n=0}^{\infty }\sum _{k=0}^{\infty }\sum _{s=0}^{k}\frac{(-1)^{k-s}(1)_{n+k+s}}{(k-s)!s!n!}H_{n}(x,A)t^{n+2k}. \end{aligned}$$

Since

$$\begin{aligned} (1)_{n+k+s}=(n+k+1)_{s}(1)_{n+k} \end{aligned}$$

then by using (1.5) and (1.9), it follows

$$\begin{aligned} \sum _{n=0}^{\infty }U_{n}(x,A)t^n&= \sum _{n=0}^{\infty }\sum _{k=0}^{\infty }\sum _{s=0}^{k}\frac{(-1)^{k}(-k)_s(n+k+1)_{s}(1)_{n+k}}{k!s!n!}H_{n}(x,A)t^{n+2k}\nonumber \\&= \sum _{n=0}^{\infty }\sum _{k=0}^{\infty }\frac{(-1)^{k}}{k!n!}\; _2F_{0}(-k,n+k+1;-;1)(1)_{n+k}H_{n}(x,A)t^{n+2k} \\&= \sum _{n=0}^{\infty }\sum _{k=0}^{[\frac{1}{2}n]}\frac{(-1)^{k}}{k!(n-2k)!}\; _2F_{0}(-k,n-k+1;-;1)(1)_{n-k}H_{n-2k}(x,A)t^{n}. \end{aligned}$$

Therefore, by identification of the coefficients of \(t^n\), we obtain an expansion of the Chebyshev matrix polynomials as a series of Hermite matrix polynomials in the form

$$\begin{aligned} U_{n}(x,A)=\sum _{k=0}^{\left[ \frac{1}{2}n\right] }\frac{(-1)^{k}(n-k)!}{k!(n-2k)!}\; _2F_{0}(-k,n-k+1;-;1)H_{n-2k}(x,A). \end{aligned}$$
(4.3)
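A numerical spot-check of (4.3) is given below; the matrix \(A\) and the point \(x\) are again illustrative choices, and the terminating \({}_2F_{0}(-k,n-k+1;-;1)\) is evaluated directly as a finite Pochhammer sum.

```python
# Minimal numerical sketch of the expansion (4.3); A and the point x are
# illustrative, and the terminating 2F0(-k, n-k+1; -; 1) is evaluated as a
# finite Pochhammer sum.
import numpy as np
from math import factorial
from scipy.linalg import sqrtm

A = np.array([[2.0, 1.0], [0.0, 3.0]])
a = sqrtm(2 * A)

def poch(p, m):                                  # Pochhammer symbol (p)_m
    out = 1.0
    for j in range(m):
        out *= p + j
    return out

def H(n, x):                                     # Hermite matrix polynomial (1.14)
    return factorial(n) * sum((-1) ** k * np.linalg.matrix_power(x * a, n - 2 * k)
                              / (factorial(k) * factorial(n - 2 * k))
                              for k in range(n // 2 + 1))

def U(n, x):                                     # explicit representation (2.3)
    return sum((-1) ** k * factorial(n - k) * np.linalg.matrix_power(x * a, n - 2 * k)
               / (factorial(k) * factorial(n - 2 * k)) for k in range(n // 2 + 1))

def f20(k, n):                                   # 2F0(-k, n-k+1; -; 1)
    return sum(poch(-k, s) * poch(n - k + 1, s) / factorial(s) for s in range(k + 1))

n, x = 6, 0.9
rhs = sum((-1) ** k * factorial(n - k) / (factorial(k) * factorial(n - 2 * k))
          * f20(k, n) * H(n - 2 * k, x) for k in range(n // 2 + 1))
assert np.allclose(U(n, x), rhs)                 # (4.3)
print("Hermite expansion (4.3) verified for n =", n)
```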

In the following section, we obtain a summation formula of Christoffel type for the Chebyshev matrix polynomials.

5 The Christoffel formula of summation

Substituting \(n+1\) for \(n\) in the pure recurrence relation (3.9) of the Chebyshev matrix polynomials gives

$$\begin{aligned} U_{n+1}(x,A)=x\sqrt{2A}U_{n}(x,A)-U_{n-1}(x,A);\quad n\ge 1. \end{aligned}$$
(5.1)

We wish to prove the identity

$$\begin{aligned} \sqrt{2A}\sum _{i=0}^{n}U_{i}(x,A)U_{i}(y,A)=\frac{U_{n}(x,A)U_{n+1}(y,A)-U_{n+1}(x,A)U_{n}(y,A)}{y-x}. \end{aligned}$$
(5.2)

From (5.1), substituting \(i\) for \(n\) and multiplying on the right by \(U_{i}(y,A)\), we get

$$\begin{aligned} U_{i+1}(x,A)U_{i}(y,A)-x\sqrt{2A}U_{i}(x,A)U_{i}(y,A)+U_{i-1}(x,A)U_{i}(y,A)=0. \end{aligned}$$
(5.3)

Interchanging \(x\) and \(y\) gives

$$\begin{aligned} U_{i+1}(y,A)U_{i}(x,A)-y\sqrt{2A}U_{i}(y,A)U_{i}(x,A)+U_{i-1}(y,A)U_{i}(x,A)=0 \end{aligned}$$
(5.4)

Subtracting (5.3) from (5.4), we obtain

$$\begin{aligned} (y-x)\sqrt{2A}U_{i}(y,A)U_{i}(x,A)&= \Big [U_{i+1}(y,A)U_{i}(x,A)-U_{i+1}(x,A)U_{i}(y,A)\Big ]\nonumber \\&\quad +\Big [U_{i-1}(y,A)U_{i}(x,A)-U_{i-1}(x,A)U_{i}(y,A)\Big ]. \end{aligned}$$
(5.5)

Setting \(i=0,1,2,\ldots ,n\) (with the convention \(U_{-1}(x,A)=O\)), we obtain

$$\begin{aligned} \sqrt{2A}(y-x)U_{0}(y,A)U_{0}(x,A)&= U_{1}(y,A)U_{0}(x,A)-U_{1}(x,A)U_{0}(y,A), \\ \sqrt{2A}(y-x)U_{1}(y,A)U_{1}(x,A)&= U_{2}(y,A)U_{1}(x,A)-U_{2}(x,A)U_{1}(y,A) \\&+\,U_{0}(y,A)U_{1}(x,A)-U_{0}(x,A)U_{1}(y,A), \\ (y-x)\sqrt{2A}U_{2}(y,A)U_{2}(x,A)&= U_{3}(y,A)U_{2}(x,A)\\&-\,U_{3}(x,A)U_{2}(y,A)+U_{1}(y,A)U_{2}(x,A)\\&-\,U_{1}(x,A)U_{2}(y,A) \end{aligned}$$

and

$$\begin{aligned} \sqrt{2A}(y-x)U_{n}(y,A)U_{n}(x,A)&= U_{n+1}(y,A)U_{n}(x,A)-U_{n+1}(x,A)U_{n}(y,A)\\&+\,U_{n-1}(y,A)U_{n}(x,A)-U_{n-1}(x,A)U_{n}(y,A) \end{aligned}$$

whence (5.2) follows by addition. Hence the Christoffel formula (5.2) is established.
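The identity (5.2) can also be confirmed numerically, as in the following sketch; the matrix \(A\) and the distinct points \(x\ne y\) are illustrative choices, not prescribed by the paper.

```python
# Minimal numerical sketch of the Christoffel formula (5.2); A, x, y and n
# are illustrative choices with x != y.
import numpy as np
from math import factorial
from scipy.linalg import sqrtm

A = np.array([[2.0, 1.0], [0.0, 3.0]])
a = sqrtm(2 * A)

def U(n, x):                                     # explicit representation (2.3)
    return sum((-1) ** k * factorial(n - k) * np.linalg.matrix_power(x * a, n - 2 * k)
               / (factorial(k) * factorial(n - 2 * k)) for k in range(n // 2 + 1))

n, x, y = 7, 0.3, 0.55
lhs = a @ sum(U(i, x) @ U(i, y) for i in range(n + 1))
rhs = (U(n, x) @ U(n + 1, y) - U(n + 1, x) @ U(n, y)) / (y - x)
assert np.allclose(lhs, rhs)                     # (5.2)
print("Christoffel formula (5.2) verified for n =", n)
```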

Finally, the Hermite matrix polynomials of two variables \(H_{n}(x,\frac{1}{t},A)\) [2] will be exploited here to represent the Chebyshev matrix polynomials of the second kind by means of the integral transform

$$\begin{aligned} U_{n}(x,A)=\frac{1}{n!}\int \limits _{0}^{\infty }\exp (-t){t}^{n}H_{n}(x,\frac{1}{t},A)dt. \end{aligned}$$
(5.6)
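The definition of the two-variable Hermite matrix polynomials from [2] is not reproduced in this text; the sketch below assumes the usual form \(H_{n}(x,y,A)=n!\sum_{k=0}^{[\frac{1}{2}n]}\frac{(-1)^k y^k(x\sqrt{2A})^{n-2k}}{k!(n-2k)!}\) (an assumption to be checked against [2]) and, under that assumption, verifies (5.6) entrywise by numerical quadrature.

```python
# Minimal sketch of (5.6).  The two-variable Hermite matrix polynomials of
# [2] are not reproduced in the text; H2 below ASSUMES
#   H_n(x, y, A) = n! * sum_k (-1)^k y^k (x sqrt(2A))^(n-2k) / (k!(n-2k)!)
# and, under that assumption, (5.6) is checked entrywise by quadrature.
import numpy as np
from math import factorial
from scipy.linalg import sqrtm
from scipy.integrate import quad

A = np.array([[2.0, 1.0], [0.0, 3.0]])
a = sqrtm(2 * A).real                            # principal square root of 2A (real here)

def H2(n, x, y):                                 # assumed two-variable Hermite matrix polynomial
    return factorial(n) * sum((-1) ** k * y ** k
                              * np.linalg.matrix_power(x * a, n - 2 * k)
                              / (factorial(k) * factorial(n - 2 * k))
                              for k in range(n // 2 + 1))

def U(n, x):                                     # explicit representation (2.3)
    return sum((-1) ** k * factorial(n - k) * np.linalg.matrix_power(x * a, n - 2 * k)
               / (factorial(k) * factorial(n - 2 * k)) for k in range(n // 2 + 1))

n, x = 4, 0.6
integral = np.array([[quad(lambda t, i=i, j=j:
                           np.exp(-t) * t ** n * H2(n, x, 1.0 / t)[i, j],
                           0, np.inf)[0]
                      for j in range(2)] for i in range(2)])
assert np.allclose(U(n, x), integral / factorial(n))      # (5.6)
print("integral representation (5.6) verified for n =", n)
```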

In a similar way, we define the Chebyshev matrix polynomials of the first kind [4] as follows

$$\begin{aligned} T_{n}(x,A)=n(\sqrt{2A})^{-1}\sum _{k=0}^{\left[ \frac{1}{2}n\right] }\frac{(-1)^k(n-k-1)!}{k! (n-2k)!}{(x\sqrt{2A})^{n-2k}};\quad n\ge 1,\quad T_{0}(x,A)=I. \end{aligned}$$
(5.7)

We obtain the fundamental recurrence relations

$$\begin{aligned} T_{n+1}(x,A)=x\sqrt{2A}T_{n}(x,A)-T_{n-1}(x,A);\quad n \ge 1 \end{aligned}$$
(5.8)

and

$$\begin{aligned} \frac{d}{d x}T_{n}(x,A) = n U_{n-1}(x,A). \end{aligned}$$
(5.9)
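The derivative relation (5.9) can be checked directly from the explicit representation (5.7), as in the following sketch; the matrix \(A\), the point \(x\) and the degree \(n\ge 1\) are illustrative choices.

```python
# Minimal sketch of (5.9): differentiate the explicit representation (5.7)
# term by term and compare with n * U_{n-1}(x, A) at a sample point.  A, x
# and n are illustrative choices (n >= 1).
import numpy as np
from math import factorial
from scipy.linalg import sqrtm

A = np.array([[2.0, 1.0], [0.0, 3.0]])
a = sqrtm(2 * A)
a_inv = np.linalg.inv(a)                         # (sqrt(2A))^{-1}

def U(n, x):                                     # explicit representation (2.3)
    return sum((-1) ** k * factorial(n - k) * np.linalg.matrix_power(x * a, n - 2 * k)
               / (factorial(k) * factorial(n - 2 * k)) for k in range(n // 2 + 1))

def dT(n, x):                                    # d/dx of T_n(x, A) in (5.7)
    return n * a_inv @ sum((-1) ** k * factorial(n - k - 1)
                           / (factorial(k) * factorial(n - 2 * k))
                           * (n - 2 * k) * x ** (n - 2 * k - 1)
                           * np.linalg.matrix_power(a, n - 2 * k)
                           for k in range(n // 2 + 1) if n - 2 * k > 0)

n, x = 5, 0.45
assert np.allclose(dT(n, x), n * U(n - 1, x))    # (5.9)
print("derivative relation (5.9) verified for n =", n)
```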

Further examples of orthogonal matrix polynomials illustrating the usefulness of the present method (the integral transform) can easily be worked out, but are not reported here for conciseness.

6 Open problem

One can apply the same classes of differential and integral operators to other new matrix polynomials; hence, new results and further applications can be obtained. These will be discussed in a forthcoming paper.