# Polynomial functions on upper triangular matrix algebras

## Abstract

There are two kinds of polynomial functions on matrix algebras over commutative rings: those induced by polynomials with coefficients in the algebra itself and those induced by polynomials with scalar coefficients. In the case of algebras of upper triangular matrices over a commutative ring, we characterize the former in terms of the latter (which are easier to handle because of the substitution homomorphism). We conclude that the set of integer-valued polynomials with matrix coefficients on an algebra of upper triangular matrices is a ring, and that the set of null-polynomials with matrix coefficients on an algebra of upper triangular matrices is an ideal.

### Keywords

Integer-valued polynomials · Null polynomials · Zero polynomials · Polynomial functions · Upper triangular matrices · Matrix algebras · Polynomials on non-commutative algebras · Matrices over commutative rings

### Mathematics Subject Classification

Primary 13B25; Secondary 13F20, 16A42, 15A24, 05A05, 11C08, 16P10

## 1 Introduction

Polynomial functions on non-commutative algebras over commutative rings come in two flavors: one, induced by polynomials with scalar coefficients, the other, by polynomials with coefficients in the non-commutative algebra itself [1, 4, 5, 6, 7, 8, 9, 11, 12].

For the algebra of upper triangular matrices over a commutative ring, we show how polynomial functions with matrix coefficients can be described in terms of polynomial functions with scalar coefficients. In particular, we express integer-valued polynomials with matrix coefficients in terms of integer-valued polynomials with scalar coefficients. The latter have been studied extensively by Evrard, Fares and Johnson [1] and Peruginelli [6] and have been characterized as polynomials that are integer-valued together with a certain kind of divided differences.

Also, our results have a bearing on several open questions in the subject of polynomial functions on non-commutative algebras. They allow us to answer in the affirmative, in the case of algebras of upper triangular matrices, two questions (see Sect. 5) posed by Werner [12]: whether the set of null-polynomials on a non-commutative algebra forms a two-sided ideal and whether the set of integer-valued polynomials on a non-commutative algebra forms a ring. In the absence of a substitution homomorphism for polynomial functions on non-commutative rings, neither of these properties is a given.

Also, our results on polynomials on upper triangular matrices show that we may be able to describe polynomial functions induced by polynomials with matrix coefficients by polynomial functions induced by polynomials with scalar coefficients, even when the relationship is not as simple as in the case of the full matrix algebra.

Let *D* be a domain with quotient field *K*, let *A* be a finitely generated torsion-free *D*-algebra, and \(B=A\otimes _D K\). To exclude pathological cases we stipulate that \(A\cap K=D\). Then the set of right integer-valued polynomials on *A* is defined as

$$\begin{aligned} {{\mathrm{Int}}}_B(A) = \{f\in B[x]\mid \forall a\in A:f(a)\in A\}, \end{aligned}$$

where *f*(*a*) is defined by substitution on the right of the coefficients, \(f(a)=\sum _k b_k a^k\), for \(a\in A\) and \(f(x)=\sum _k b_k x^k\in B[x]\). Left integer-valued polynomials are defined analogously, using left-substitution \({f(a)_{\ell }}=\sum _k a^k b_k\):

$$\begin{aligned} {{\mathrm{Int^{\ell }}}}_B(A) = \{f\in B[x]\mid \forall a\in A:{f(a)_{\ell }}\in A\}. \end{aligned}$$

Both sets contain the ring of integer-valued polynomials on *A* with scalar coefficients:

$$\begin{aligned} {{\mathrm{Int}}}_K(A) = \{f\in K[x]\mid \forall a\in A:f(a)\in A\}, \end{aligned}$$

which is easier to handle, because, for each \(a\in A\), substitution of *a* for the variable is a ring homomorphism from *K*[*x*] to *B*, but not, in general, from *B*[*x*] to *B*.

For a commutative ring *R* and \(n\ge 1\), let \(M_n(R)\) denote the ring of \(n\times n\) matrices with entries in *R*, and \(T_n(R)\) the subring of upper triangular matrices, i.e., the ring consisting of \(n\times n\) matrices \(C=(c_{ij})\) with \(c_{ij}\in R\) and \(c_{ij}=0\) whenever \(i>j\).

### Remark 1.1

Let *R* be a commutative ring. We will make extensive use of the ring isomorphism between the ring of polynomials in one variable with coefficients in \(M_n(R)\) and the ring of \(n\times n\) matrices with entries in \(R[x]\):

$$\begin{aligned} (M_n(R))[x]\simeq M_n(R[x]), \qquad \sum _k F_k x^k \mapsto \Bigl (\sum _k f_{ij}^{(k)} x^k\Bigr )_{1\le i,j\le n}, \end{aligned}$$

where \(f_{ij}^{(k)}\) denotes the (*i*, *j*)-th entry of \(F_k\).

### Notation 1.2

For \(f\in (M_n(R))[x]\), \(F_k\) denotes the *k*-th coefficient of *f*, and \(f_{ij}^{(k)}\in R\) the (*i*, *j*)-th entry in \(F_k\). When we reinterpret *f* as an element of \(M_n(R[x])\) via the ring isomorphism of Remark 1.1, we denote the (*i*, *j*)-th entry of *f* by \(f_{ij}\).
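To make Remark 1.1 and this notation concrete, here is a small computational sketch (the function names are my own, not from the paper): a polynomial \(f=\sum _k F_k x^k\) with matrix coefficients is stored as a list of coefficient matrices, and the isomorphism sends it to the matrix whose (*i*, *j*)-th entry is the coefficient list of the scalar polynomial \(f_{ij}\).

```python
# Sketch of the isomorphism (M_n(R))[x] ~ M_n(R[x]) of Remark 1.1.
# A polynomial f = sum_k F_k x^k is a list [F_0, F_1, ...] of n x n
# matrices; its image is the n x n matrix whose (i, j)-th entry is the
# coefficient list [f_ij^(0), f_ij^(1), ...] of the scalar polynomial f_ij.

def to_matrix_of_polys(F):
    """(M_n(R))[x] -> M_n(R[x]): list of coefficient matrices
    to matrix of coefficient lists."""
    n = len(F[0])
    return [[[Fk[i][j] for Fk in F] for j in range(n)] for i in range(n)]

def to_poly_with_matrix_coeffs(f):
    """M_n(R[x]) -> (M_n(R))[x]: the inverse map."""
    n = len(f)
    deg = max(len(f[i][j]) for i in range(n) for j in range(n))
    return [[[f[i][j][k] if k < len(f[i][j]) else 0
              for j in range(n)] for i in range(n)] for k in range(deg)]
```

Applying `to_matrix_of_polys` to \(f=F_0+F_1x\) collects, entry by entry, the scalar polynomials \(f_{ij}=f_{ij}^{(0)}+f_{ij}^{(1)}x\); the two maps are mutually inverse.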

\(\left[ M\right] _{i\,j}\) denotes the (*i*, *j*)-th entry of a matrix *M*. In particular, \(\left[ f(C)\right] _{i\,j}\) is the (*i*, *j*)-th entry of *f*(*C*), the result of substituting the matrix *C* for the variable (to the right of the coefficients) in *f*, and \(\left[ {f(C)_{\ell }}\right] _{i\,j}\) the (*i*, *j*)-th entry of \({f(C)_{\ell }}\), the result of substituting *C* for the variable in *f* to the left of the coefficients.
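A short computational sketch of the two substitutions just described (helper names are mine, not the paper's): since matrix multiplication is not commutative, right and left substitution of the same matrix into the same polynomial can give different results.

```python
# Right substitution f(C) = sum_k B_k C^k and left substitution
# f(C)_l = sum_k C^k B_k for f = sum_k B_k x^k with matrix coefficients.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(C, k):
    P = [[int(i == j) for j in range(len(C))] for i in range(len(C))]
    for _ in range(k):
        P = matmul(P, C)
    return P

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def subst_right(coeffs, C):
    """f(C) = sum_k B_k C^k (coefficients to the left of the powers)."""
    R = [[0] * len(C) for _ in C]
    for k, Bk in enumerate(coeffs):
        R = madd(R, matmul(Bk, matpow(C, k)))
    return R

def subst_left(coeffs, C):
    """f(C)_l = sum_k C^k B_k (coefficients to the right of the powers)."""
    R = [[0] * len(C) for _ in C]
    for k, Bk in enumerate(coeffs):
        R = madd(R, matmul(matpow(C, k), Bk))
    return R
```

For instance, with \(f=B_1x\), \(B_1=\left(\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right)\) and \(C=\left(\begin{smallmatrix}1&1\\0&2\end{smallmatrix}\right)\), right substitution gives \(B_1C=\left(\begin{smallmatrix}0&2\\0&0\end{smallmatrix}\right)\), while left substitution gives \(CB_1=\left(\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right)\).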

We will work in a setting that allows us to consider integer-valued polynomials and null-polynomials on upper triangular matrices at the same time.

From now on, *R* is a commutative ring, *S* a subring of *R*, and *I* an ideal of *S*.

### Notation 1.3

Let *R* be a commutative ring, *S* a subring of *R* and *I* an ideal of *S*. We set

$$\begin{aligned} {{\mathrm{Int_R}}}(\mathrm{T}_{n}(S), \mathrm{T}_{n}(I))&= \{f\in R[x]\mid \forall C\in \mathrm{T}_n(S):f(C)\in \mathrm{T}_n(I)\},\\ {{\mathrm{Int}}}_{\mathrm{T}_n(R)}(\mathrm{T}_n(S),\mathrm{T}_n(I))&= \{f\in (\mathrm{T}_n(R))[x]\mid \forall C\in \mathrm{T}_n(S):f(C)\in \mathrm{T}_n(I)\},\\ {{\mathrm{Int^{\ell }}}}_{\mathrm{T}_n(R)}(\mathrm{T}_n(S),\mathrm{T}_n(I))&= \{f\in (\mathrm{T}_n(R))[x]\mid \forall C\in \mathrm{T}_n(S):{f(C)_{\ell }}\in \mathrm{T}_n(I)\}. \end{aligned}$$

### Example 1.4

When *D* is a domain with quotient field *K* and we set \(R=K\) and \(S=I=D\), then \({{\mathrm{Int_R}}}(\mathrm{T}_{n}(S), \mathrm{T}_{n}(I))={{\mathrm{Int_K}}}(\mathrm{T}_{n}(D))\) is the ring of integer-valued polynomials on \(T_n(D)\) with coefficients in *K*, and \({{\mathrm{Int}}}_{\mathrm{T}_n(R)}(\mathrm{T}_n(S),\mathrm{T}_n(I))={{\mathrm{Int}}}_{\mathrm{T}_n(K)}(\mathrm{T}_n(D))\) is the set of right integer-valued polynomials on \(T_n(D)\) with coefficients in \(\mathrm{T}_n(K)\). We will show that the latter set is closed under multiplication and, therefore, a ring, cf. Theorem 5.4.
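As a quick numerical illustration (my own check, not part of the paper's argument) of how much stronger integer-valuedness on \(T_2(D)\) is than on *D*, take \(D=\mathbb {Z}\), \(K=\mathbb {Q}\): the binomial polynomial \(x(x-1)/2\) lies in \({{\mathrm{Int}}}(\mathbb {Z})\), but not in \({{\mathrm{Int_{K}}}}(\mathrm{T}_2(\mathbb {Z}))\).

```python
# x(x-1)/2 maps every integer to an integer, but applied to an upper
# triangular integer matrix it can produce non-integral entries.
from fractions import Fraction

def binom2_at(C):
    """Right-substitute an n x n integer matrix C into f(x) = x(x-1)/2."""
    n = len(C)
    C2 = [[sum(C[i][k] * C[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[Fraction(C2[i][j] - C[i][j], 2) for j in range(n)]
            for i in range(n)]

# On scalars (1x1 matrices) f is integer-valued, since a(a-1) is even;
# on T_2(Z) the matrix C = [[2, 1], [0, 0]] is a counterexample.
```

Here \(C=\left(\begin{smallmatrix}2&1\\0&0\end{smallmatrix}\right)\) gives \(f(C)=\left(\begin{smallmatrix}1&1/2\\0&0\end{smallmatrix}\right)\notin T_2(\mathbb {Z})\).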

### Example 1.5

When *R* is a commutative ring and we set \(S=R\) and \(I=(0)\), then \({{\mathrm{Int_R}}}(\mathrm{T}_{n}(S), \mathrm{T}_{n}(I))=\mathrm {N}_{R}(T_{n}(R))\) is the ideal of those polynomials in *R*[*x*] that map every matrix in \(T_n(R)\) to zero, and \({{\mathrm{Int}}}_{\mathrm{T}_n(R)}(\mathrm{T}_n(S),\mathrm{T}_n(I))=\mathrm {N}_{T_n(R)}(T_{n}(R))\) is the set of polynomials in \((T_n(R))[x]\) that map every matrix in \(T_n(R)\) to zero under right substitution. We will show that the latter set is actually an ideal of \((T_n(R))[x]\), cf. Theorem 5.2.
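For a concrete instance of a null-polynomial with scalar coefficients (my own example, not taken from the paper, but easily justified): for \(R=\mathbb {Z}/2\mathbb {Z}\) and any \(C\in T_2(R)\), the matrix \(C^2-C\) is strictly upper triangular, and a strictly upper triangular \(2\times 2\) matrix squares to zero; hence \((x^2-x)^2=x^4-2x^3+x^2\) lies in \(\mathrm {N}_{R}(T_{2}(R))\). A brute-force check over all eight matrices:

```python
# Verify that f(x) = (x^2 - x)^2 is a null-polynomial on T_2(Z/2Z).
from itertools import product

def matmul2(A, B, m=2):
    """2x2 matrix product mod m."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % m for j in range(2)]
            for i in range(2)]

def f_null(C, m=2):
    """Evaluate f(x) = x^4 - 2x^3 + x^2 at C, with entries mod m."""
    C2 = matmul2(C, C, m)
    C3 = matmul2(C2, C, m)
    C4 = matmul2(C3, C, m)
    return [[(C4[i][j] - 2 * C3[i][j] + C2[i][j]) % m for j in range(2)]
            for i in range(2)]

# Enumerate all matrices [[a, c], [0, b]] in T_2(Z/2Z).
all_zero = all(
    f_null([[a, c], [0, b]]) == [[0, 0], [0, 0]]
    for a, b, c in product(range(2), repeat=3)
)
```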

We illustrate here in matrix form our main result on the connection between the two kinds of polynomial functions on upper triangular matrices, those induced by polynomials with matrix coefficients on one hand, and those induced by polynomials with scalar coefficients on the other hand. (For details, see Theorem 4.2.)

### Remark 1.6

- (1)\({{\mathrm{Int}}}_{\mathrm{T}_n(R)}(\mathrm{T}_n(S),\mathrm{T}_n(I))\)$$\begin{aligned} \quad =\begin{pmatrix} {{\mathrm{Int_R}}}(\mathrm{T}_{n}(S), \mathrm{T}_{n}(I))&{}{{\mathrm{Int_R}}}(\mathrm{T}_{n-1}(S), \mathrm{T}_{n-1}(I))&{}\ldots &{}{{\mathrm{Int_R}}}(\mathrm{T}_{2}(S), \mathrm{T}_{2}(I))&{}{{\mathrm{Int_R}}}(S,I) \\ 0&{}{{\mathrm{Int_R}}}(\mathrm{T}_{n-1}(S), \mathrm{T}_{n-1}(I))&{}\ldots &{}{{\mathrm{Int_R}}}(\mathrm{T}_{2}(S), \mathrm{T}_{2}(I))&{}{{\mathrm{Int_R}}}(S,I)\\ &{}&{}\ddots &{}&{}\\ 0&{}0&{}\ldots &{}{{\mathrm{Int_R}}}(\mathrm{T}_{2}(S), \mathrm{T}_{2}(I))&{}{{\mathrm{Int_R}}}(S,I)\\ 0&{}0&{}\ldots &{}0&{}{{\mathrm{Int_R}}}(S,I)\\ \end{pmatrix} \end{aligned}$$
- (2)\({{\mathrm{Int^{\ell }}}}_{\mathrm{T}_n(R)}(\mathrm{T}_n(S),\mathrm{T}_n(I))\)$$\begin{aligned} \quad =\begin{pmatrix} {{\mathrm{Int_R}}}(S,I)&{}{{\mathrm{Int_R}}}(S,I)&{}\ldots &{}{{\mathrm{Int_R}}}(S,I)&{}{{\mathrm{Int_R}}}(S,I)\\ 0&{}{{\mathrm{Int_R}}}(\mathrm{T}_{2}(S), \mathrm{T}_{2}(I))&{}\ldots &{}{{\mathrm{Int_R}}}(\mathrm{T}_{2}(S), \mathrm{T}_{2}(I))&{}{{\mathrm{Int_R}}}(\mathrm{T}_{2}(S), \mathrm{T}_{2}(I))\\ &{}&{}\ddots &{}&{}\\ 0&{}0&{}\ldots &{}{{\mathrm{Int_R}}}(\mathrm{T}_{n-1}(S), \mathrm{T}_{n-1}(I))&{}{{\mathrm{Int_R}}}(\mathrm{T}_{n-1}(S), \mathrm{T}_{n-1}(I))\\ 0&{}0&{}\ldots &{}0&{}{{\mathrm{Int_R}}}(\mathrm{T}_{n}(S), \mathrm{T}_{n}(I))\\ \end{pmatrix} \end{aligned}$$

## 2 Path polynomials and polynomials with scalar coefficients

We will use the combinatorial interpretation of the (*i*, *j*)-th entry in the *k*-th power of a matrix as a weighted sum of paths from *i* to *j*.

Consider a set *V* together with a subset *E* of \(V\times V\). We may regard the pair (*V*, *E*) as a set with a binary relation or as a directed graph. Choosing the interpretation as a graph, we may associate monomials to paths and polynomials to finite sets of paths.

For our purposes, a path of length \(k\ge 1\) from *a* to *b* (where \(a,b\in V\)) in a directed graph (*V*, *E*) is a sequence \(e_1e_2\ldots e_{k}\) of edges \(e_i=(a_i,b_i)\in E\) such that \(a_1=a\), \(b_{k}=b\) and \(b_{j}=a_{j+1}\) for \(1\le j<k\). Also, for each \(a\in V\), there is a path of length 0 from *a* to *a*, which we denote by \(\varepsilon _a\). (For \(a\ne b\) there is no path of length 0 from *a* to *b*.)

Let \(X=\{x_{ab}\mid (a,b)\in E\}\) be a set of indeterminates indexed by the edges, and consider the polynomial ring *R*[*X*] over a commutative ring *R*.

To each edge \(e=(a, b)\) in *E*, we associate the variable \(x_{ab}\) and to each path \(e_1e_2\ldots e_{k}\) of length *k*, with \(e_i=(a_i,b_i)\), the monomial of degree *k* which is the product of the variables associated to the edges of the path: \(x_{a_1b_1}x_{a_2b_2}\ldots x_{a_kb_k}\). To each path of length 0 we associate the monomial 1.

If *E* is finite, or, more generally, if for any pair of vertices \(a,b\in V\) and fixed \(k\ge 0\) there are only finitely many paths in (*V*, *E*) from *a* to *b*, we define the *k*-th path polynomial from *a* to *b*, denoted by \(p_{a b}^{(k)}\), as the sum in *R*[*X*] of the monomials corresponding to paths of length *k* from *a* to *b*. If there is no path of length *k* from *a* to *b* in (*V*, *E*), we set \(p_{a b}^{(k)}=0\).

The directed graphs we consider are \((\mathbb {N},\le )\), with set of vertices \(\mathbb {N}\) and set of edges \(\{(a,b)\mid a\le b\}\), and its subgraphs induced by intervals, with set of vertices an interval [*i*, *j*] and set of edges \(\{(a,b)\mid i\le a\le b\le j\}\).

Because of the transitivity of the relation “\(\le \)”, a path in \((\mathbb {N},\le )\) from *a* to *b* involves only vertices in the interval \([a,\, b]\). The path polynomial \(p_{a b}^{(k)}\), therefore, is the same whether we consider *a*, *b* as vertices in the graph \((\mathbb {N},\le )\), or any subgraph given by an interval [*i*, *j*] with \(a,b\in [i,j]\). So we may suppress all references to intervals and subgraphs and define:

### Definition 2.1

Let *R* be a commutative ring. The *k*-th path polynomial from *i* to *j* (corresponding to the relation \((\mathbb {N},\le )\)) in *R*[*X*] is defined

- (1)For \(1\le i\le j\) and \(k> 0\) by$$\begin{aligned} p_{ij}^{(k)}=\sum _{i=i_1\le i_2\le \ldots \le i_{k+1}=j} x_{i_1 i_2}x_{i_2 i_3}\ldots x_{i_{k-1}i_k}x_{i_k i_{k+1}}, \end{aligned}$$
- (2) for \(1\le i \le j\) and \(k=0\) by \(p_{ij}^{(0)}=\delta _{ij}\),
- (3) for \(i>j\) and all *k* by \(p_{ij}^{(k)}=0\).

Furthermore, we define the sequence of path polynomials from *a* to *b* as \(p_{ab}=\bigl (p_{ab}^{(k)}\bigr )_{k\ge 0}\).

### Remark 2.2

Note that \(p_{ij}^{(k)}\) is the (*i*, *j*)-th entry of the *k*-th power of a generic upper triangular \(n\times n\) matrix (with \(n\ge i,j\)) whose (*i*, *j*)-th entry is \(x_{ij}\) when \(i\le j\) and zero otherwise.
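The following sketch (a helper of my own, not from the paper) checks this remark numerically: evaluating \(p_{ij}^{(k)}\) at a concrete upper triangular matrix *C*, by summing the products \(c_{i_1 i_2}\cdots c_{i_k i_{k+1}}\) over all nondecreasing sequences \(i=i_1\le \cdots \le i_{k+1}=j\), reproduces the (*i*, *j*)-th entry of \(C^k\).

```python
# Evaluate the k-th path polynomial p_ij^(k) at a matrix C
# (0-based indices), following Definition 2.1.
from itertools import combinations_with_replacement

def path_poly_at(C, i, j, k):
    if i > j:
        return 0
    if k == 0:
        return int(i == j)
    total = 0
    # interior vertices i <= i_2 <= ... <= i_k <= j of a path of length k
    for mid in combinations_with_replacement(range(i, j + 1), k - 1):
        seq = (i,) + mid + (j,)
        prod = 1
        for a, b in zip(seq, seq[1:]):
            prod *= C[a][b]
        total += prod
    return total
```

For \(C=\left(\begin{smallmatrix}1&2\\0&3\end{smallmatrix}\right)\) (indices shifted to start at 0), the entry \(\left[ C^2\right] _{1\,2}=1\cdot 2+2\cdot 3=8\) equals `path_poly_at(C, 0, 1, 2)`.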

### Example 2.3

The paths of length 2 from 1 to 3 in \((\mathbb {N},\le )\) are \(1\le 1\le 3\), \(1\le 2\le 3\) and \(1\le 3\le 3\), so that \(p_{13}^{(2)}=x_{11}x_{13}+x_{12}x_{23}+x_{13}x_{33}\).

In addition to right and left substitution of a matrix for the variable in a polynomial in *R*[*x*] or \((\mathrm{T}_n(R))[x]\), we are going to use another way of plugging matrices into polynomials, namely, into polynomials in \(R[X]=R[\{x_{ij}\mid i,j\in \mathbb {N}\}]\). For this purpose, the matrix \(C=(c_{ij})\in M_n(R)\) is regarded as a vector of elements of *R* indexed by \(\mathbb {N}\times \mathbb {N}\), with \(c_{ij}=0\) for \(i>n\) or \(j>n\):

### Definition 2.4

For a polynomial \(p\in R[X]=R[\{x_{ij}\mid i, j\in \mathbb {N}\}]\) and a matrix \(C=(c_{ij})\in M_n(R)\) we define *p*(*C*) as the result of substituting \(c_{ij}\) for those \(x_{ij}\) in *p* with \(i,j\le n\) and substituting 0 for all \(x_{kh}\) with \(k>n\) or \(h>n\).

To be able to describe the (*i*, *j*)-th entry in *f*(*C*), where \(f\in R[x]\), we need one more construction: for sequences of polynomials \(p=(p_i)_{i\ge 0}, q=(q_i)_{i\ge 0}\) in *R*[*X*], at least one of which is finite, we define a scalar product \(\langle p, q\rangle =\sum _i p_i q_i\). Actually, we only need one special instance of this construction, that where one of the sequences is the sequence of coefficients of a polynomial in *R*[*x*] and the other a sequence of path polynomials from *a* to *b*.

### Definition 2.5

For \(f=\sum _k f_k x^k\in R[x]\) and the sequence \(p_{ab}=\bigl (p_{ab}^{(k)}\bigr )_{k\ge 0}\) of path polynomials from *a* to *b* as in Definition 2.1, we define

$$\begin{aligned} \langle f, p_{ab}\rangle = \sum _k f_k\, p_{ab}^{(k)} \in R[X]. \end{aligned}$$

### Definition 2.6

For a polynomial \(p\in R[X]=R[\{x_{ij}\mid i,j\in \mathbb {N}\}]\) and \(S\subseteq R\), we define the image \(p(S^{*})\subseteq R\) as the set of values of *p* as the variables occurring in *p* range through *S* independently. (The star in \(S^{*}\) serves to remind us that the arguments of *p* are not elements of *S*, but *k*-tuples of elements of *S* for unspecified *k*.)

We define \({{\mathrm{Int}}}(S^{*},I)\) as the set of those polynomials in \(R[X]=R[\{x_{ij}\mid i,j\in \mathbb {N}\}]\) that take values in *I* whenever elements of *S* are substituted for the variables.

The notation \({{\mathrm{Int}}}(S^{*},I)\) is suggested by the convention that \({{\mathrm{Int}}}(S,I)\) consists of polynomials in one indeterminate mapping elements of *S* to elements of *I* and, for \(k\in \mathbb {N}\), \({{\mathrm{Int}}}(S^{k},I)\) consists of polynomials in *k* indeterminates mapping *k*-tuples of elements of *S* to *I*.

We summarize here the connection between path polynomials and the related constructions of Definitions 2.1, 2.4, 2.5 and 2.6 with entries of powers of matrices and entries of the image of a matrix under a polynomial function.

### Remark 2.7

Let *R* be a commutative ring, \(C\in \mathrm{T}_n(R)\), \(k\ge 0\), \(1\le i,j\le n\), and let \(p_{ij}^{(k)}\) be the *k*-th path polynomial from *i* to *j* in *R*[*X*] as in Definition 2.1.

- (1)
\(\left[ C^k\right] _{i j} = p_{ij}^{(k)}(C)\)

- (2)
For \(f\in R[x]\), \(\left[ f(C)\right] _{i j} = \langle f, p_{i j}\rangle (C)\).

- (3)
If \(i\ne j\) and the *i*-th row or the *j*-th column of *C* is zero, then \(p_{ij}^{(k)}(C)=0\) and, for all \(f\in R[x]\), \(\langle f, p_{i j}\rangle (C)=0\).

- (4)
\(p_{ij}^{(k)}(S^{*})= \{p_{ij}^{(k)}(C)\mid C\in \mathrm{T}_n(S)\}.\)

### Proof

(1) and (2) follow immediately from Definitions 2.1, 2.4 and 2.5; compare Remark 2.2. (3) follows from (2) and Definition 2.4, since every monomial occurring in \(p_{ij}^{(k)}\) involves a variable \(x_{im}\) for some *m* and a variable \(x_{hj}\) for some *h*. Also, (4) follows from Definitions 2.1 and 2.4. \(\square \)
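To see Remark 2.7 (2) in action numerically, here is a self-contained sketch (helper names are my own): for a scalar polynomial \(f=\sum _k f_k x^k\), the value \(\langle f, p_{ij}\rangle (C)=\sum _k f_k\, p_{ij}^{(k)}(C)\) agrees with the (*i*, *j*)-th entry of *f*(*C*) computed directly.

```python
# Check [f(C)]_ij = <f, p_ij>(C) for a scalar polynomial f (0-based indices).
from itertools import combinations_with_replacement

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def path_poly_at(C, i, j, k):
    """Evaluate p_ij^(k) at C by summing over paths of length k."""
    if i > j:
        return 0
    if k == 0:
        return int(i == j)
    total = 0
    for mid in combinations_with_replacement(range(i, j + 1), k - 1):
        seq = (i,) + mid + (j,)
        prod = 1
        for a, b in zip(seq, seq[1:]):
            prod *= C[a][b]
        total += prod
    return total

def entry_of_f_at(fcoeffs, C, i, j):
    """<f, p_ij>(C) = sum_k f_k * p_ij^(k)(C)."""
    return sum(fk * path_poly_at(C, i, j, k) for k, fk in enumerate(fcoeffs))

def f_of_matrix(fcoeffs, C):
    """f(C) = sum_k f_k C^k computed directly, for comparison."""
    n = len(C)
    P = [[int(i == j) for j in range(n)] for i in range(n)]  # C^0
    R = [[0] * n for _ in range(n)]
    for fk in fcoeffs:
        R = [[R[i][j] + fk * P[i][j] for j in range(n)] for i in range(n)]
        P = matmul(P, C)
    return R
```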

### Lemma 2.8

For \(f\in R[x]\), the image of \(\langle f, p_{ij}\rangle \) under substitution of elements of *S* for the variables depends only on *f* and \(j-i\), that is, for all \(i\le j\) and all \(m\in \mathbb {N}\)

$$\begin{aligned} \langle f, p_{ij}\rangle (S^{*}) = \langle f, p_{i+m\; j+m}\rangle (S^{*}). \end{aligned}$$

### Proof

Consider the *R*-algebra endomorphism \(\psi \) of *R*[*X*] defined by \(\psi (x_{ij})=x_{i+m\; j+m}\); it maps \(p_{ij}^{(k)}\) to \(p_{i+m\; j+m}^{(k)}\), and hence \(\langle f, p_{ij}\rangle \) to \(\langle f, p_{i+m\; j+m}\rangle \).

Applying \(\psi \) amounts to a renaming of variables; it doesn’t affect the image of the polynomial function resulting from substituting elements of *S* for the variables. \(\square \)

### Proposition 2.9

Let \(f\in R[x]\) and \(n\ge 1\). The following are equivalent:

- (1)
\(f\in {{\mathrm{Int}}}_R(T_n(S), T_n(I))\)

- (2)
\(\forall \; 1\le i\le j\le n\quad \langle f, p_{ij}\rangle \in {{\mathrm{Int}}}(S^{*},I)\)

- (3)
\(\forall \; 0\le k\le n-1\quad \exists \, i\in \mathbb {N}:\; \langle f, p_{i\;i+k}\rangle \in {{\mathrm{Int}}}(S^{*},I)\)

### Proof

The (*i*, *j*)-th entry of *f*(*C*), for \(C\in \mathrm{T}_n(R)\), is \(\langle f, p_{ij}\rangle (C)\), by Remark 2.7 (2). If *C* varies through \(\mathrm{T}_n(S)\), then all variables occurring in \(\langle f, p_{ij}\rangle \) vary through *S* independently. This shows the equivalence of (1) and (2).

By Lemma 2.8, the image of \(\langle f, p_{ij}\rangle \) as the variables range through *S* depends only on *f* and \(j-i\). This shows the equivalence of (2) and (3). \(\square \)

## 3 Lemmata for polynomials with matrix coefficients

### Notation 3.1

For \(C\in \mathrm{T}_n(R)\) and \(1\le h\le j\le n\), let \(C^{[h,j]}\) denote the matrix obtained from *C* by replacing all entries with row- or column-index outside the interval \([h,j]=\{m\in \mathbb {N}\mid h\le m\le j\}\) by zeros; and for \(S\subseteq R\), let

$$\begin{aligned} T_n^{[{h},{j}]}(S) = \{C^{[h,j]}\mid C\in \mathrm{T}_n(S)\}. \end{aligned}$$

### Remark 3.2

Since a path in \((\mathbb {N},\le )\) from *h* to *j* involves only vertices in [*h*, *j*], we have \(\bigl [ (C^{[h,j]})^k\bigr ]_{h\,j}=\left[ C^k\right] _{h\,j}\) for all *k*, and hence \(\left[ g(C^{[h,j]})\right] _{h\,j}=\left[ g(C)\right] _{h\,j}\) for every \(g\in R[x]\).

We derive some technical, but useful, formulas for the (*i*, *j*)-th entry in *f*(*C*) and \({f(C)_{\ell }}\), respectively, where \(f\in (T_n(R))[x]\) and \(C\in \mathrm{T}_n(R)\).

### Lemma 3.3

Let \(f\in (\mathrm{T}_n(R))[x]\), \(C\in \mathrm{T}_n(R)\) and \(1\le i\le j\le n\). Then

- (R)$$\begin{aligned}{}[f(C)]_{i\,j} = \sum _{h\in [i, j]} [f_{ih}(C)]_{h\,j} = \sum _{h\in [i, j]} [f_{ih}(C^{[h,j]})]_{h\,j} \end{aligned}$$and also$$\begin{aligned} \left[ f(C)\right] _{i\,j} = \sum _{h\in [i, j]} \langle f_{ih}, p_{hj}\rangle (C) = \sum _{h\in [i, j]} \langle f_{ih}, p_{hj}\rangle (C^{[h,j]}) . \end{aligned}$$
- (L)$$\begin{aligned}{}[{f(C)_{\ell }}]_{i\,j} = \sum _{h\in [i, j]} [f_{hj}(C)]_{i\,h} = \sum _{h\in [i, j]} [f_{hj}(C^{[i,h]})]_{i\,h} \end{aligned}$$and also$$\begin{aligned} \left[ {f(C)_{\ell }}\right] _{i\,j} = \sum _{h\in [i, j]} \langle f_{hj}, p_{ih}\rangle (C) = \sum _{h\in [i, j]} \langle f_{hj}, p_{ih}\rangle (C^{[i,h]}). \end{aligned}$$

### Proof

For right substitution, \(f(C)=\sum _k F_k C^k\), so that

$$\begin{aligned}{}[f(C)]_{i\,j} = \sum _k \sum _{h\in [i,j]} f_{ih}^{(k)} \left[ C^k\right] _{h\,j} = \sum _{h\in [i, j]} [f_{ih}(C)]_{h\,j}. \end{aligned}$$

By Remark 3.2, when computing \([f_{ih}(C)]_{h\,j}\), we may replace *C* by \(C^{[h,j]}\), the matrix obtained from *C* by replacing all entries with row or column index outside the interval [*h*, *j*] by zeros. Therefore, \([f(C)]_{i\,j} = \sum _{h\in [i, j]} [f_{ih}(C^{[h,j]})]_{h\,j}\). The remaining expressions in (R) follow by Remark 2.7 (2).

Similarly, for substitution of *C* for the variable of *f* to the left of the coefficients, \({f(C)_{\ell }}=\sum _k C^k F_k\), so that

$$\begin{aligned}{}[{f(C)_{\ell }}]_{i\,j} = \sum _k \sum _{h\in [i,j]} \left[ C^k\right] _{i\,h} f_{hj}^{(k)} = \sum _{h\in [i, j]} [f_{hj}(C)]_{i\,h}. \end{aligned}$$

By Remark 3.2, when computing \([f_{hj}(C)]_{i\,h}\), we may replace *C* by \(C^{[i,h]}\), the matrix obtained from *C* by replacing all entries in rows and columns with index outside [*i*, *h*] by zeros. Therefore, \([{f(C)_{\ell }}]_{i\,j} = \sum _{h\in [i, j]} [f_{hj}(C^{[i,h]})]_{i\,h}\), and the remaining expressions in (L) follow by Remark 2.7 (2). \(\square \)

### Lemma 3.4

Let \(f\in (\mathrm{T}_n(R))[x]\) and \(1\le i\le j\le n\).

- [right:]The following are equivalent:
- (1)
\(\left[ f(C)\right] _{i\,j}\in I\) for all \(C\in \mathrm{T}_n(S)\).

- (2)
\(\left[ f_{ih}(C)\right] _{h\,j}\in I\) for all \(C\in \mathrm{T}_n(S)\), for all \(h\in [i,j]\).

- (3)
\(\langle f_{ih}, p_{hj}\rangle \in {{\mathrm{Int_R}}}(S^{*},I)\) for all \(h\in [i,j]\).

- [left:]The following are equivalent:
- (1)
\(\left[ {f(C)_{\ell }}\right] _{i\,j}\in I\) for all \(C\in \mathrm{T}_n(S)\).

- (2)
\([f_{hj}(C)]_{i\,h}\in I\) for all \(C\in \mathrm{T}_n(S)\), for all \(h\in [i,j]\).

- (3)
\(\langle f_{hj}, p_{ih}\rangle \in {{\mathrm{Int_R}}}(S^{*},I)\) for all \(h\in [i,j]\).

### Proof

For **right** substitution: (2 \(\Rightarrow \) 1) follows directly from the formula \([f(C)]_{i\,j} = \sum _{h\in [i, j]} [f_{ih}(C)]_{h\,j}\) of Lemma 3.3.

(1 \(\Rightarrow \) 2) Induction on *h* from *j* down to *i*. Given \(m\in [i,j]\), we show (2) for \(h=m\), assuming that the statement holds for all values \(h\in [i,j]\) with \(h>m\). In the above formula from Lemma 3.3, we let *C* vary through \(T_n^{[{m},{j}]}(S)\) (as in Notation 3.1). For such a *C*, the summands \([f_{ih}(C)]_{h\,j}\) with \(h<m\) are zero, by Remark 2.7 (3). The summands with \(h>m\) are in *I*, by induction hypothesis. Therefore \(\left[ f(C)\right] _{i\,j}\in I\) for all \(C\in \mathrm{T}_n(S)\) implies \([f_{im}(C)]_{m\,j}\in I\) for all \(C\in T_n^{[{m},{j}]}(S)\). Since \(\left[ f_{im}(C)\right] _{m\,j}=[f_{im}(C^{[m,j]})]_{m\,j}\), by Remark 3.2, the statement follows for all \(C\in \mathrm{T}_n(S)\).

Finally, (3 \(\Leftrightarrow \) 2) because, firstly, \(\left[ f_{ih}(C)\right] _{h\,j}= \langle f_{ih}, p_{hj}\rangle (C)\), for all \(C\in \mathrm{T}_n(R)\), and secondly, as far as the image of \(\langle f_{ih}, p_{hj}\rangle \) is concerned, all possible values under substitution of elements from *S* for the variables are obtained as *C* varies through \(\mathrm{T}_n(S)\), because no variables other than \(x_{ij}\) with \(1\le i\le j\le n\) occur in this polynomial.

For **left** substitution: (2 \(\Rightarrow \) 1) follows directly from the formula \([{f(C)_{\ell }}]_{i\,j} = \sum _{h\in [i, j]} [f_{hj}(C)]_{i\,h}\) of Lemma 3.3.

(1 \(\Rightarrow \) 2) Induction on *h*, from \(h=i\) to \(h=j\). Given \(m\in [i,j]\), we show (2) for \(h=m\) under the hypothesis that it holds for all \(h\in [i,j]\) with \(h<m\). In the above formula from Lemma 3.3, the summands corresponding to \(h<m\) are in *I* by induction hypothesis. If we let *C* vary through \(T_n^{[{i},{m}]}(S)\) (as in 3.1), then, for such *C*, the summands \([f_{hj}(C)]_{i\,h}=\langle f_{hj},p_{ih}\rangle (C)\) corresponding to \(h>m\) are zero, by Remark 2.7 (3). Therefore \([f_{mj}(C^{[i,m]})]_{i\,m}\in I\) for all \(C\in \mathrm{T}_n(S)\). But \([f_{mj}(C^{[i,m]})]_{i\,m}= [f_{mj}(C)]_{i\,m}\) for all \(C\in \mathrm{T}_n(S)\), by Remark 3.2. Thus, (2) follows by induction.

Finally, (3 \(\Leftrightarrow \) 2) because, firstly, \([f_{hj}(C)]_{i\,h}= \langle f_{hj}, p_{ih}\rangle (C)\), for all \(C\in \mathrm{T}_n(R)\), and secondly, as far as the image of \(\langle f_{hj}, p_{ih}\rangle \) is concerned, all possible values under substitution of elements of *S* for the variables are realized as *C* varies through \(\mathrm{T}_n(S)\), since no variables other than \(x_{ij}\) with \(1\le i\le j\le n\) occur in this polynomial. \(\square \)

## 4 Results for polynomials with matrix coefficients

For fixed *i* and \(f\in (\mathrm{T}_n(R))[x]\), whether the entries of the *i*-th row of *f*(*C*) are in *I* for every \(C\in \mathrm{T}_n(S)\) depends only on the *i*-th rows of the coefficients of *f*. Indeed, if \(f=\sum _k F_k x^k\), and \(\left[ B\right] _i\) denotes the *i*-th row of a matrix *B*, then

$$\begin{aligned} \left[ f(C)\right] _i = \sum _k \left[ F_k\right] _i\, C^k. \end{aligned}$$

Similarly, for fixed *j*, whether the entries of the *j*-th column of \({f(C)_{\ell }}\) are in *I* for every \(C\in \mathrm{T}_n(S)\) depends only on the *j*-th columns of the coefficients of \(f=\sum _k F_k x^k\): if \(\left[ B\right] ^j\) denotes the *j*-th column of a matrix *B*, then

$$\begin{aligned} \left[ {f(C)_{\ell }}\right] ^j = \sum _k C^k\, \left[ F_k\right] ^j. \end{aligned}$$

The following lemma describes the conditions that the *i*-th rows, or *j*-th columns, of the coefficients of \(f\in (\mathrm{T}_n(R))[x]\) have to satisfy to guarantee that the entries of the *i*-th row of *f*(*C*), or the *j*-th column of \({f(C)_{\ell }}\), respectively, are in *I* for every \(C\in \mathrm{T}_n(S)\).

### Lemma 4.1

Let \(f\in (\mathrm{T}_n(R))[x]\).

- [right:]Let \(1\le i\le n\). The following are equivalent:
- (1)
For every \(C\in \mathrm{T}_n(S)\), all entries of the *i*-th row of *f*(*C*) are in *I*.

- (2)
For all \(C\in \mathrm{T}_n(S)\) and all *h*, *j* with \(i\le h\le j\le n\), \(\left[ f_{ih}(C)\right] _{h\,j}\in I\).

- (3)
For all *h*, *j* with \(i\le h\le j\le n\), \(\langle f_{ih}, p_{hj}\rangle \in {{\mathrm{Int_R}}}(S^{*},I)\).

- (4)
\(f_{ih}\in {{\mathrm{Int_R}}}(\mathrm{T}_{n-h+1}(S), \mathrm{T}_{n-h+1}(I))\) for \(h=i,\ldots , n\).

- [left:]Let \(1\le j\le n\). The following are equivalent:
- (1)
For every \(C\in \mathrm{T}_n(S)\), all entries of the *j*-th column of \({f(C)_{\ell }}\) are in *I*.

- (2)
For all \(C\in \mathrm{T}_n(S)\) and all *i*, *h* with \(1\le i\le h\le j\), \([f_{hj}(C)]_{i\,h}\in I\).

- (3)
For all *i*, *h* with \(1\le i\le h\le j\), \(\langle f_{hj}, p_{ih}\rangle \in {{\mathrm{Int_R}}}(S^{*},I)\).

- (4)
\(f_{hj}\in {{\mathrm{Int_R}}}(\mathrm{T}_{h}(S), \mathrm{T}_{h}(I))\) for \(h=1,\ldots , j\).

### Proof

For **right** substitution: by Lemma 3.4, (1–3) are equivalent conditions for the (*i*, *j*)-th entry of *f*(*C*) to be in *I* for all *j* with \(i\le j\le n\), for every \(C\in \mathrm{T}_n(S)\).

(3\(\Rightarrow \)4) For each *h* with \(i\le h\le n\), letting *j* vary from *h* to *n* shows that criterion (3) of Proposition 2.9 for \(f_{ih}\in {{\mathrm{Int_R}}}(\mathrm{T}_{n-h+1}(S), \mathrm{T}_{n-h+1}(I))\) is satisfied.

(4\(\Rightarrow \)3) For each fixed *h*, \(f_{ih}\) satisfies, by criterion (2) of Proposition 2.9, in particular, \(\langle f_{ih}, p_{1k}\rangle \in {{\mathrm{Int}}}(S^{*},I)\) for all \(1\le k\le n-h+1\). By Lemma 2.8, \(\langle f_{ih}, p_{1k}\rangle (S^{*})\) equals \(\langle f_{ih}, p_{h\, h+k-1}\rangle (S^{*})\). Therefore \(\langle f_{ih}, p_{h j}\rangle \in {{\mathrm{Int}}}(S^{*},I)\) for all *j* with \(h\le j\le n\).

For **left** substitution: by Lemma 3.4, (1–3) are equivalent conditions for the (*i*, *j*)-th entry of \({f(C)_{\ell }}\) to be in *I* for all *i* with \(1\le i\le j\), for every \(C\in \mathrm{T}_n(S)\).

(3\(\Rightarrow \)4) For each *h* with \(1\le h\le j\), letting *i* range from 1 to *h* shows that criterion (3) of Proposition 2.9 for \(f_{hj}\in {{\mathrm{Int_R}}}(\mathrm{T}_{h}(S), \mathrm{T}_{h}(I))\) is satisfied.

(4\(\Rightarrow \)3) For each *h* with \(1\le h\le j\), applying criterion (2) of Proposition 2.9 to \(f_{hj}\) shows, in particular, \(\langle f_{hj}, p_{ih}\rangle \in {{\mathrm{Int}}}(S^{*},I)\) for all \(1\le i\le h\). \(\square \)

We are now ready to prove the promised characterization of polynomials with matrix coefficients in terms of polynomials with scalar coefficients:

### Theorem 4.2

Let \(f\in (\mathrm{T}_n(R))[x]\). Identify *f* with its image \((f_{ij})\) in \(T_n(R[x])\) under the isomorphism of Remark 1.1, that is, write \(f=\sum _k F_k x^k\) and set \(f_{ij}=\sum _k f_{ij}^{(k)} x^k\), where \(f_{ij}^{(k)}\) is the (*i*, *j*)-th entry of \(F_k\). Then

- (1)
\(f\in {{\mathrm{Int}}}_{\mathrm{T}_n(R)}(\mathrm{T}_n(S),\mathrm{T}_n(I))\) if and only if \(f_{ij}\in {{\mathrm{Int_R}}}(\mathrm{T}_{n-j+1}(S), \mathrm{T}_{n-j+1}(I))\) for all \(1\le i\le j\le n\);

- (2)
\(f\in {{\mathrm{Int^{\ell }}}}_{\mathrm{T}_n(R)}(\mathrm{T}_n(S),\mathrm{T}_n(I))\) if and only if \(f_{ij}\in {{\mathrm{Int_R}}}(\mathrm{T}_{i}(S), \mathrm{T}_{i}(I))\) for all \(1\le i\le j\le n\).

### Proof

The criterion for \(f\in {{\mathrm{Int}}}_{\mathrm{T}_n(R)}(\mathrm{T}_n(S),\mathrm{T}_n(I))\) is just Lemma 4.1 applied to each row-index *i* of the coefficients of \(f\in (T_n(R))[x]\), and the criterion for \(f\in {{\mathrm{Int^{\ell }}}}_{\mathrm{T}_n(R)}(\mathrm{T}_n(S),\mathrm{T}_n(I))\) is Lemma 4.1 applied to each column index *j* of the coefficients of *f*. \(\square \)

With the above proof we have also shown Remark 1.6 from the introduction, which is the representation in matrix form of Theorem 4.2.

## 5 Applications to null-polynomials and integer-valued polynomials

In this section we apply our results to null-polynomials: polynomials, with coefficients in a commutative ring *R* or in \(\mathrm{T}_n(R)\), that induce the zero function on \(\mathrm{T}_n(R)\). As before, we denote substitution for the variable in a polynomial \(f=\sum _k b_k x^k\) in \((\mathrm{T}_n(R))[x]\), to the right or to the left of the coefficients, by \(f(C)=\sum _k b_k C^k\) and \({f(C)_{\ell }}=\sum _k C^k b_k\), respectively.

Null-polynomials are connected to integer-valued polynomials as follows: when *D* is a domain with quotient field *K* and \(f\in (\mathrm{T}_n(K))[x]\), we may represent *f* as *g*/*d* with \(d\in D\) and \(g\in (\mathrm{T}_n(D))[x]\). Then *f* is (right) integer-valued on \(\mathrm{T}_n(D)\) if and only if the residue class of *g* in \((T_n(D/dD))[x]\) is a (right) null-polynomial on \(T_n(D/dD)\) [3].

From Theorem 4.2 we derive the following corollary (see also Remark 1.6):

### Corollary 5.1

If we identify polynomials in \((\mathrm{T}_n(R))[x]\) with their images in \(T_n(R[x])\) under the isomorphism of Remark 1.1, then \(f\in \mathrm {N}_{T_n(R)}(T_{n}(R))\) if and only if \(f_{ij}\in \mathrm {N}_{R}(T_{n-j+1}(R))\) for all \(1\le i\le j\le n\), and \(f\in \mathrm {N}^{\ell }_{T_n(R)}(T_{n}(R))\) if and only if \(f_{ij}\in \mathrm {N}_{R}(T_{i}(R))\) for all \(1\le i\le j\le n\).

This allows us to conclude:

### Theorem 5.2

Let *R* be a commutative ring. The set \(N_{\mathrm{T}_n(R)}(\mathrm{T}_n(R))\) of right null-polynomials on \(\mathrm{T}_n(R)\) in \((\mathrm{T}_n(R))[x]\), and the set \(N^{\ell }_{\mathrm{T}_n(R)}(\mathrm{T}_n(R))\) of left null-polynomials on \(\mathrm{T}_n(R)\) in \((\mathrm{T}_n(R))[x]\), are ideals of \((\mathrm{T}_n(R))[x]\).

### Proof

Note that \(\mathrm {N}_{R}(T_{m}(R))\subseteq \mathrm {N}_{R}(T_{k}(R))\) for \(m\ge k\). Also, \(\mathrm {N}_{R}(T_{m}(R)) R[x]\subseteq \mathrm {N}_{R}(T_{m}(R))\) and \(R[x]\mathrm {N}_{R}(T_{m}(R)) \subseteq \mathrm {N}_{R}(T_{m}(R))\) by substitution homomorphism for polynomials with coefficients in the commutative ring *R*. This observation together with matrix multiplication shows that the image of \(N_{\mathrm{T}_n(R)}(\mathrm{T}_n(R))\) in \(T_n(R[x])\) under the ring isomorphism from \((\mathrm{T}_n(R))[x]\) to \(T_n(R[x])\) is an ideal of \(T_n(R[x])\), and likewise the image of \(N^{\ell }_{\mathrm{T}_n(R)}(\mathrm{T}_n(R))\) in \(T_n(R[x])\). \(\square \)
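To illustrate Theorem 5.2 and Corollary 5.1 in the smallest interesting case, here is a sketch with data chosen by me, not taken from the paper: over \(R=\mathbb {Z}/2\mathbb {Z}\) with \(n=2\), take \(f\in (\mathrm{T}_2(R))[x]\) whose image in \(T_2(R[x])\) has entries \(f_{11}=x^4+x^2\) (which is \((x^2-x)^2\) mod 2, a null-polynomial on \(T_2(R)\), cf. Example 1.5) and \(f_{12}=f_{22}=x^2+x\) (null-polynomials on *R*). By Corollary 5.1, *f* should be a right null-polynomial on \(T_2(R)\); a brute-force check over all eight matrices confirms it.

```python
# Right-substitute every C in T_2(Z/2Z) into f = sum_k F_k x^k and
# check that the result is the zero matrix.
from itertools import product

M = 2  # we work over Z/2Z

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % M for j in range(2)]
            for i in range(2)]

def subst_right(coeffs, C):
    """f(C) = sum_k B_k C^k mod M, f given as a list of coefficient matrices."""
    P = [[1, 0], [0, 1]]  # C^0
    R = [[0, 0], [0, 0]]
    for Bk in coeffs:
        T = matmul(Bk, P)
        R = [[(R[i][j] + T[i][j]) % M for j in range(2)] for i in range(2)]
        P = matmul(P, C)
    return R

# coefficient matrices of f: f_11 = x^4 + x^2, f_12 = f_22 = x^2 + x
F = [
    [[0, 0], [0, 0]],  # x^0
    [[0, 1], [0, 1]],  # x^1
    [[1, 1], [0, 1]],  # x^2
    [[0, 0], [0, 0]],  # x^3
    [[1, 0], [0, 0]],  # x^4
]

is_null = all(
    subst_right(F, [[a, c], [0, b]]) == [[0, 0], [0, 0]]
    for a, b, c in product(range(2), repeat=3)
)
```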

Let *D* be a domain with quotient field *K*. Applying Theorem 4.2 to integer-valued polynomials (see also Remark 1.6), we obtain:

### Corollary 5.3

If we identify polynomials in \((\mathrm{T}_n(K))[x]\) with their images in \(T_n(K[x])\) under the isomorphism of Remark 1.1, then \(f\in {{\mathrm{Int}}}_{\mathrm{T}_n(K)}(\mathrm{T}_n(D))\) if and only if \(f_{ij}\in {{\mathrm{Int_K}}}(\mathrm{T}_{n-j+1}(D))\) for all \(1\le i\le j\le n\), and \(f\in {{\mathrm{Int^{\ell }}}}_{\mathrm{T}_n(K)}(\mathrm{T}_n(D))\) if and only if \(f_{ij}\in {{\mathrm{Int_K}}}(\mathrm{T}_{i}(D))\) for all \(1\le i\le j\le n\).

We can see the following:

### Theorem 5.4

Let *D* be a domain with quotient field *K*. The set \({{\mathrm{Int}}}_{\mathrm{T}_n(K)}(\mathrm{T}_n(D))\) of right integer-valued polynomials in \((\mathrm{T}_n(K))[x]\) and the set \({{\mathrm{Int^{\ell }}}}_{\mathrm{T}_n(K)}(\mathrm{T}_n(D))\) of left integer-valued polynomials in \((\mathrm{T}_n(K))[x]\) are subrings of \((\mathrm{T}_n(K))[x]\).

### Proof

Note that \({{\mathrm{Int_K}}}(\mathrm{T}_{m}(D))\subseteq {{\mathrm{Int_K}}}(\mathrm{T}_{k}(D))\) for \(m\ge k\), and that each \({{\mathrm{Int_K}}}(\mathrm{T}_{m}(D))\) is closed under multiplication, by the substitution homomorphism for polynomials with coefficients in the commutative ring *K*. Together with matrix multiplication, this implies that the image of \({{\mathrm{Int}}}_{\mathrm{T}_n(K)}(\mathrm{T}_n(D))\) in \(T_n(K[x])\) under the ring isomorphism of Remark 1.1, as described in Corollary 5.3, is closed under multiplication, and likewise the image of \({{\mathrm{Int^{\ell }}}}_{\mathrm{T}_n(K)}(\mathrm{T}_n(D))\). \(\square \)

## Notes

### Acknowledgements

Open access funding provided by Graz University of Technology.

### References

- 1\. Evrard, S., Fares, Y., Johnson, K.: Integer valued polynomials on lower triangular integer matrices. Monatsh. Math. **170**, 147–160 (2013)
- 2\. Frisch, S.: Integrally closed domains, minimal polynomials, and null ideals of matrices. Commun. Algebra **32**, 2015–2017 (2004)
- 3\. Frisch, S.: Polynomial separation of points in algebras. In: Arithmetical Properties of Commutative Rings and Monoids. Lecture Notes Pure Appl. Math., vol. 241, pp. 253–259. Chapman & Hall/CRC, Boca Raton (2005)
- 4\. Frisch, S.: Integer-valued polynomials on algebras. J. Algebra **373**, 414–425 (2013). (see also Corrigendum 412, p. 282 (2014))
- 5\. Loper, K.A., Werner, N.J.: Generalized rings of integer-valued polynomials. J. Number Theory **132**, 2481–2490 (2012)
- 6\. Peruginelli, G.: Integer-valued polynomials over matrices and divided differences. Monatsh. Math. **173**, 559–571 (2014)
- 7\. Peruginelli, G., Werner, N.: Decomposition of integer-valued polynomial algebras (2016). Preprint, arXiv:1604.08337
- 8\. Peruginelli, G., Werner, N.: Non-triviality conditions for integer-valued polynomial rings on algebras. Monatsh. Math. (2016, to appear)
- 9\. Peruginelli, G., Werner, N.J.: Properly integral polynomials over the ring of integer-valued polynomials on a matrix ring. J. Algebra **460**, 320–339 (2016)
- 10\. Rissner, R.: Null ideals of matrices over residue class rings of principal ideal domains. Linear Algebra Appl. **494**, 44–69 (2016)
- 11\. Werner, N.J.: Integer-valued polynomials over matrix rings. Commun. Algebra **40**, 4717–4726 (2012)
- 12\. Werner, N.J.: Polynomials that kill each element of a finite ring. J. Algebra Appl. **13**, 1350111 (2014)

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.