1 Introduction

In the area of Algebraic Combinatorics there is an object called a commutative association scheme [2, 13]. This is a combinatorial generalization of a finite group, that retains enough of the group structure so that one can still speak of the character table. During the decade 1970–1980 it was realized by E. Bannai, P. Delsarte, D. Higman and others that commutative association schemes help to unify many aspects of group theory, coding theory, and design theory. An early work in this direction was the 1973 thesis of Delsarte [5]. This thesis helped to motivate the work of Bannai [2, p. i], who taught a series of graduate courses on commutative association schemes during September 1978–December 1982 at the Ohio State University. The lecture notes from those courses, along with more recent developments, became a book coauthored with T. Ito [2]. The book had a large impact; it is currently cited 698 times according to MathSciNet.

In the Introduction to [2], Bannai and Ito describe the goals of their book. One goal was to summarize what is known about commutative association schemes up to that time. Another goal was to focus the reader’s attention on two remarkable types of schemes, said to be P-polynomial and Q-polynomial. A P-polynomial scheme is essentially the same thing as a distance-regular graph, and can be viewed as a finite analog of a 2-point homogeneous space [25]. Similarly, a Q-polynomial scheme is a finite analog of a rank 1 symmetric space [25]. By a theorem of H. C. Wang [25], a compact Riemannian manifold is 2-point homogeneous if and only if it is rank 1 symmetric. This result was extended to the noncompact case by J. Tits [24] and S. Helgason [8]. Motivated by all this, Bannai and Ito conjectured that a primitive association scheme is P-polynomial if and only if it is Q-polynomial, provided that the diameter is sufficiently large [2, p. 312]. They also proposed the classification of schemes that are both P-polynomial and Q-polynomial [2, p. xiii].

Progress on the proposed classification was made while the book was still in preparation. A P-polynomial scheme gets its name from the fact that there exists a sequence of orthogonal polynomials \(\lbrace u_i\rbrace _{i=0}^d\) such that \(u_i(A)=A_i/k_i\) for \(0 \le i \le d\), where d is the diameter of the scheme, \(A_i\) is the ith associate matrix, \(A=A_1\), and \(k_i\) is the ith valency [2, pp. 190, 261]. Similarly, for a Q-polynomial scheme there exists a sequence of orthogonal polynomials \(\lbrace u^*_i\rbrace _{i=0}^d\) such that \(u^*_i(A^*)=A^*_i/k^*_i\) for \(0 \le i \le d\), where \(A^*_i\) is the ith dual associate matrix, \(A^*=A^*_1\), and \(k^*_i\) is the ith dual valency [2, pp. 193, 261], [15, p. 384]. For schemes that are P-polynomial and Q-polynomial, we have \(u_i(\theta _j)= u^*_j(\theta ^*_i) \) for \(0 \le i,j\le d\), where \(\lbrace \theta _i\rbrace _{i=0}^d\) (resp. \(\lbrace \theta ^*_i\rbrace _{i=0}^d\) ) are the eigenvalues of A (resp. \(A^*\)) [2, p. 262]. These equations are known as Delsarte duality [12] or Askey-Wilson duality [20, p. 261]. This duality can be defined for \(d=\infty \), but throughout this paper we assume that d is finite.

Askey-Wilson duality comes up naturally in the area of special functions and orthogonal polynomials. In this area the classical orthogonal polynomials are often described using a partially-ordered set called the Askey-tableau. The vertices in the poset represent the various families of classical orthogonal polynomials, and the covering relation describes what happens when a limit is taken. See [11] for an early version of the tableau, and [10, pp. 183, 413] for a more recent version. One branch of the tableau, sometimes called the terminating branch, contains the polynomials that are orthogonal with respect to a measure that is nonzero at finitely many arguments. At the top of this terminating branch sit the q-Racah polynomials, introduced in 1979 by R. Askey and J. Wilson [1]. The rest of the terminating branch consists of the q-Hahn, dual q-Hahn, q-Krawtchouk, dual q-Krawtchouk, quantum q-Krawtchouk, affine q-Krawtchouk, Racah, Hahn, dual Hahn, and Krawtchouk polynomials. The above-named polynomials are defined using hypergeometric series or basic hypergeometric series, and it is transparent from the definition that they satisfy Askey-Wilson duality.

Back at Ohio State, there was a graduate student attending Bannai’s classes by the name of Douglas Leonard. With Askey’s encouragement, Leonard showed in [12] that the q-Racah polynomials give the most general orthogonal polynomial system that satisfies Askey-Wilson duality, under the assumption that \(d\ge 9\). In [2, Theorem 5.1] Bannai and Ito give a version of Leonard’s theorem that removes the assumption on d and explicitly describes all the limiting cases that show up. This version gives a complete classification of the orthogonal polynomial systems that satisfy Askey-Wilson duality. It shows that the orthogonal polynomial systems that satisfy Askey-Wilson duality are from the terminating branch of the Askey-tableau, except for one family with \(q=-1\) now called the Bannai-Ito polynomials [2, p. 271], [21, Example 5.14]. In our view, the terminating branch of the Askey-tableau should include the Bannai-Ito polynomials. Adopting this view, for the rest of this paper we include the Bannai-Ito polynomials in the terminating branch of the Askey-tableau.

The Leonard theorem [2, Theorem 5.1] is notoriously complicated; the statement alone takes 11 pages. In an effort to simplify and clarify the theorem, the present author introduced the notion of a Leonard pair [18, Definition 1.1] and Leonard system [18, Definition 1.4]. Roughly speaking, a Leonard pair consists of two diagonalizable linear transformations on a finite-dimensional vector space, each acting on the eigenspaces of the other one in an irreducible tridiagonal fashion; see Definition 2.1 below. A Leonard system is essentially a Leonard pair together with appropriate orderings of the eigenspaces of the two transformations; see Definition 2.3 below. In [18, Theorem 1.9] the Leonard systems are classified up to isomorphism. This classification is related to Leonard’s theorem as follows. In [18, Appendix A] and [20, Sect. 19], a bijection is given between the isomorphism classes of Leonard systems over \({\mathbb {R}}\) and the orthogonal polynomial systems that satisfy Askey-Wilson duality. Given the bijection, the classification [18, Theorem 1.9] becomes a ‘linear-algebraic version’ of Leonard’s theorem. This version is conceptually simple and quite elegant in our view. In [23] we start with the Leonard pair axiom and derive, in a uniform and attractive manner, the polynomials in the terminating branch of the Askey-tableau, along with their properties such as the 3-term recurrence, difference equation, Askey-Wilson duality, and orthogonality.

We comment on how the theory of Leonard pairs and Leonard systems depends on the choice of ground field. The classification [18, Theorem 1.9] shows that the ground field does not matter in a substantial way, unless it has characteristic 2. In this case, the theory admits an additional family of polynomials called the orphans. The orphans have diameter \(d=3\) only; they are described in [21, Example 5.15] and Example 20.13 below.

The book [2] appeared in 1984 and the paper [18] appeared in 2001. It is natural to ask what happened in between. The concept of a Leonard pair over \({\mathbb {R}}\) appears in [14, Definitions 1.1, 1.2], where it is called a thin Leonard pair. Also appearing in [14] is the correspondence between Leonard pairs over \({\mathbb {R}}\) and the orthogonal polynomial systems that satisfy Askey-Wilson duality. In addition, [14, Definition 3.1] describes an algebra called the Leonard algebra, now known as the Askey-Wilson algebra. The paper [14] was submitted but never published. In [26] A. Zhedanov introduced the Askey-Wilson algebra. This algebra has a presentation involving two generators \(K_1\), \(K_2\) that satisfy a pair of quadratic relations. In [6], Granovskii, Lutzenko, and Zhedanov consider a finite-dimensional irreducible module for the Askey-Wilson algebra, on which each of \(K_1\), \(K_2\) is diagonalizable. Under some minor assumptions, they show that each of \(K_1\), \(K_2\) acts in an irreducible tridiagonal fashion on an eigenbasis for the other one. In hindsight, it is fair to say that they constructed an example of a Leonard pair, although they did not define a Leonard pair as an independent concept. The paper [15, Theorem 2.1] contains a version of the Leonard pair concept that is close to [18, Definition 1.1].

Turning to the present paper, we obtain two main results: (i) an improved proof for the classification of Leonard systems; (ii) a comprehensive description of the intersection numbers of a Leonard system.

We now describe our main results in detail. In [18, Theorem 1.9] the Leonard systems are classified up to isomorphism, and the given proof is completely correct as far as we know. However the proof is longer than necessary. In the roughly two decades since the paper was published, we have discovered some ‘shortcuts’ that simplify the proof and avoid certain tedious calculations. The shortcuts are summarized as follows.

  • In [18, Sect. 3] we established the split canonical form for a Leonard system. In the present paper we make use of the fact that the split canonical form still exists under weaker assumptions; these are described in Proposition 7.6 below.

  • The concept of a normalizing idempotent was introduced by Edward Hanson in [7, Sect. 6]. In the present paper we use this concept to simplify numerous arguments; see Sects. 6, 7, 17 below.

  • In [18, Theorem 4.8] we explicitly gave the matrix entries for a certain matrix representation of the primitive idempotents for a Leonard system. The computation of these entries is tedious and takes up most of [18, Sect. 4]. In the present paper we replace all of this by a single identity (15) that is established in a few lines.

  • In the present paper we use an antiautomorphism \(\dagger \) to obtain the result Proposition 4.4, which is roughly summarized as follows: as we construct a Leonard system, if we construct three-fourths of it then the last fourth comes for free.

  • In [18, Lemma 7.2] we used a slightly obscure method to establish the irreducibility of the underlying module for a Leonard system. In the present paper this lemma is avoided using the first bullet point above.

  • We replace the slightly technical results [18, Lemmas 10.3–10.5] by a more elementary result, Proposition 13.4 below.

  • We replace most of [18, Sect. 11] by a single result, Proposition 8.4, called the wrap-around result. The wrap-around result was discovered by T. Ito and the present author during our effort to classify the tridiagonal pairs; it is the essential idea behind the proof of [9, Lemma 9.9].

  • Using the improvements listed above, we replace the arguments in [18, Sects. 13, 14] with more efficient arguments in Sect. 17 below.

Some parts of the improved proof are unchanged from the original; we still use [18, Sects. 8, 9] and [18, Lemmas 10.2, 12.4]. These results are reproduced in the present paper in order to obtain a complete proof, all in one place. We believe that this complete proof is suitable for The Book if not this journal.

Concerning our second main result, we mentioned earlier that the Leonard systems correspond to the orthogonal polynomial sequences that satisfy Askey-Wilson duality. Unfortunately, it is a bit difficult to go back and forth between the two points of view. From the polynomial perspective, the main parameters are the intersection numbers (or connection coefficients) that describe the 3-term recurrence; from the Leonard system perspective, the main parameters are the first and second split sequence that make up part of the parameter array. There are some equations that relate the two types of parameters; see [20, Theorem 17.7] and Lemma 19.4 below. However, the nonlinear nature of these equations makes them difficult to use. To mitigate the difficulty, we display many identities that involve the intersection numbers along with the first and second split sequence. Taken together, these identities should make it easier to work with the intersection numbers in the future. These identities can be found in Sect. 19. We also explicitly give the intersection numbers for every isomorphism class of Leonard system; these are contained in the Appendix.

The paper is organized as follows. Sects. 2, 3 contain preliminary comments and definitions. In Sect. 4 we describe the antiautomorphism \(\dagger \) and use it to obtain Proposition 4.4. In Sect. 5 we describe some polynomials that will be used throughout the paper. In Sect. 6 we discuss the concept of a normalizing idempotent. In Sect. 7 we use normalizing idempotents to describe certain kinds of decompositions relevant to Leonard systems. Section 8 contains the wrap-around result. In Sect. 9 we recall the parameter array of a Leonard system. In Sect. 10 we state the Leonard system classification, which is Theorem 10.1. Sections 11, 12, 13 are about recurrent sequences. In Sect. 14 we define a polynomial in two variables that will be useful on several occasions later in the paper. Sections 15, 16 are about the tridiagonal relations. In Sect. 17 we complete the proof of Theorem 10.1. Section 18 contains two characterizations related to Leonard systems and parameter arrays. In Sect. 19 we give a comprehensive treatment of the intersection numbers of a Leonard system. These intersection numbers are listed in the Appendix.

2 Preliminaries

We now begin our formal argument. Shortly we will define a Leonard pair and Leonard system. Before we get into detail, we briefly review some notation and basic concepts. Let \({\mathbb {F}}\) denote a field. Every vector space and algebra discussed in this paper is understood to be over \({\mathbb {F}}\). Throughout the paper fix an integer \(d\ge 0\). Let \(\mathrm{Mat}_{d+1}({\mathbb {F}})\) denote the algebra consisting of the \(d+1\) by \(d+1\) matrices that have all entries in \({\mathbb {F}}\). We index the rows and columns by \(0,1,\ldots , d\). Throughout the paper V denotes a vector space with dimension \(d+1\). Let \(\mathrm{End}(V)\) denote the algebra consisting of the \({\mathbb {F}}\)-linear maps from V to V. Next we recall how each basis \(\lbrace v_i\rbrace _{i=0}^d\) of V gives an algebra isomorphism \(\mathrm{End}(V) \rightarrow \mathrm{Mat}_{d+1}({\mathbb {F}})\). For \(X \in \mathrm{End}(V)\) and \(M \in \mathrm{Mat}_{d+1}({\mathbb {F}})\), we say that M represents X with respect to \(\lbrace v_i\rbrace _{i=0}^d\) whenever \(Xv_j = \sum _{i=0}^d M_{ij}v_i\) for \(0 \le j \le d\). The isomorphism sends X to the unique matrix in \(\mathrm{Mat}_{d+1}({\mathbb {F}})\) that represents X with respect to \(\lbrace v_i\rbrace _{i=0}^d\). A matrix \(M \in \mathrm{Mat}_{d+1}({\mathbb {F}})\) is called tridiagonal whenever each nonzero entry lies on either the diagonal, the subdiagonal, or the superdiagonal. Assume that M is tridiagonal. Then M is called irreducible whenever each entry on the subdiagonal is nonzero, and each entry on the superdiagonal is nonzero.
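The two matrix conditions just defined are easy to test mechanically. The following sketch (the helper name is ours, purely for illustration) checks whether a matrix is irreducible tridiagonal in the sense above:

```python
import numpy as np

def is_irreducible_tridiagonal(M, tol=1e-9):
    """Check that every nonzero entry of M lies on the diagonal,
    subdiagonal, or superdiagonal, and that every subdiagonal and
    superdiagonal entry is nonzero."""
    n = M.shape[0]
    tridiagonal = all(abs(M[i, j]) <= tol
                      for i in range(n) for j in range(n)
                      if abs(i - j) > 1)
    irreducible = all(abs(M[i, i - 1]) > tol and abs(M[i - 1, i]) > tol
                      for i in range(1, n))
    return tridiagonal and irreducible

M = np.array([[2.0, 1.0, 0.0],
              [3.0, 0.0, 5.0],
              [0.0, 4.0, 1.0]])
print(is_irreducible_tridiagonal(M))   # True
M[2, 0] = 7.0                          # entry below the subdiagonal
print(is_irreducible_tridiagonal(M))   # False
```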

Definition 2.1

(See [18, Definition 1.1]). By a Leonard pair on V we mean an ordered pair \(A, A^*\) of elements in \(\mathrm{End}(V)\) such that:

  1. (i)

    there exists a basis for V with respect to which the matrix representing A is diagonal and the matrix representing \(A^*\) is irreducible tridiagonal;

  2. (ii)

    there exists a basis for V with respect to which the matrix representing \(A^*\) is diagonal and the matrix representing A is irreducible tridiagonal.

The Leonard pair \(A,A^*\) is said to be over \({\mathbb {F}}\) and have diameter d.

Note 2.2

According to a common notational convention, \(A^*\) denotes the conjugate-transpose of A. We are not using this convention. In a Leonard pair A, \(A^*\) the linear transformations A and \(A^*\) are arbitrary subject to (i), (ii) above.
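Definition 2.1 can be checked numerically on a small instance. The sketch below uses a commonly cited Krawtchouk-type example of a Leonard pair; the specific matrices are our choice for illustration and are not taken from the text above. Condition (ii) holds in the standard basis, since \(A^*\) is diagonal and A is irreducible tridiagonal; for condition (i) we pass to an eigenbasis of A, ordered by decreasing eigenvalue, and inspect the matrix representing \(A^*\) there:

```python
import numpy as np

d = 4
A = np.zeros((d + 1, d + 1))
for i in range(1, d + 1):
    A[i, i - 1] = i            # subdiagonal entries 1, 2, ..., d
    A[i - 1, i] = d - i + 1    # superdiagonal entries d, d-1, ..., 1
Astar = np.diag([d - 2.0 * i for i in range(d + 1)])

# Eigenbasis of A, ordered by decreasing eigenvalue.
evals, P = np.linalg.eig(A)
P = P[:, np.argsort(-evals.real)]
B = np.linalg.inv(P) @ Astar @ P   # matrix representing Astar in that basis

off_band = [B[i, j] for i in range(d + 1) for j in range(d + 1)
            if abs(i - j) > 1]
band = [B[i, i - 1] for i in range(1, d + 1)] + \
       [B[i - 1, i] for i in range(1, d + 1)]
print(np.allclose(off_band, 0))           # True: B is tridiagonal
print(all(abs(x) > 1e-9 for x in band))   # True: B is irreducible
```

Note that the tridiagonal shape of B does not depend on how the eigenvectors are scaled, since rescaling basis vectors conjugates B by an invertible diagonal matrix.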

When working with a Leonard pair, it is convenient to consider a closely related object called a Leonard system. Before defining a Leonard system, we recall a few concepts from linear algebra. An element \(A \in \mathrm{End}(V)\) is said to be diagonalizable whenever V is spanned by the eigenspaces of A. The element A is called multiplicity-free whenever A is diagonalizable and each eigenspace of A has dimension one. Note that A is multiplicity-free if and only if A has \(d+1\) mutually distinct eigenvalues in \({\mathbb {F}}\). Assume that A is multiplicity-free, and let \(\lbrace V_i\rbrace _{i=0}^d\) denote an ordering of the eigenspaces of A. For \(0 \le i \le d\) let \(\theta _i\) denote the eigenvalue of A for \(V_i\). For \(0 \le i \le d\) define \(E_i \in \mathrm{End}(V)\) such that \((E_i - I)V_i = 0 \) and \(E_i V_j=0\) if \(j\not =i\) \((0 \le j \le d)\). We call \(E_i\) the primitive idempotent of A for \(V_i\) (or \(\theta _i\)). We have (i) \(E_i E_j = \delta _{i,j} E_i\) \((0 \le i,j\le d)\); (ii) \(I = \sum _{i=0}^d E_i\); (iii) \(AE_i = \theta _i E_i = E_i A\) \((0 \le i \le d)\); (iv) \(A = \sum _{i=0}^d \theta _i E_i\); (v) \(V_i = E_iV\) \((0 \le i \le d)\); (vi) \(\mathrm{rank}(E_i) = 1\) \((0 \le i \le d)\); (vii) \(\mathrm{tr}(E_i) = 1\) \((0 \le i \le d)\), where tr means trace. Moreover

$$\begin{aligned} E_i=\prod _{\substack{0 \le j \le d \\ j \ne i}} \frac{A-\theta _j I}{\theta _i-\theta _j} \qquad \qquad (0 \le i \le d). \end{aligned}$$
(1)

Let \({\mathcal {D}}\) denote the subalgebra of \(\mathrm{End}(V)\) generated by A. The elements \(\lbrace A^i\rbrace _{i=0}^d\) form a basis for \({\mathcal {D}}\), and \(\prod _{i=0}^d (A-\theta _i I) = 0\). Moreover \(\lbrace E_i\rbrace _{i=0}^d\) form a basis for \({\mathcal {D}}\).
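Formula (1) and properties (i)–(vii) above are easy to verify numerically. The sketch below uses a hypothetical multiplicity-free matrix of our choosing (upper triangular, so its mutually distinct eigenvalues \(0,1,\ldots,d\) can be read off the diagonal):

```python
import numpy as np

d = 3
A = np.diag(np.arange(d + 1, dtype=float)) + np.diag(np.ones(d), k=1)
theta = np.arange(d + 1, dtype=float)   # eigenvalues of A
Id = np.eye(d + 1)

# Build each primitive idempotent E_i via the product formula (1).
E = []
for i in range(d + 1):
    M = Id.copy()
    for j in range(d + 1):
        if j != i:
            M = M @ (A - theta[j] * Id) / (theta[i] - theta[j])
    E.append(M)

print(np.allclose(sum(E), Id))                          # (ii)
print(np.allclose(E[1] @ E[1], E[1]))                   # (i), i = j
print(np.allclose(E[0] @ E[2], 0))                      # (i), i != j
print(np.allclose(A @ E[2], theta[2] * E[2]))           # (iii)
print(all(np.linalg.matrix_rank(Ei) == 1 for Ei in E))  # (vi)
print(np.allclose([np.trace(Ei) for Ei in E], 1))       # (vii)
```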

Definition 2.3

(See [18, Definition 1.4]). By a Leonard system on V, we mean a sequence

$$\begin{aligned} \Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d) \end{aligned}$$
(2)

of elements in \(\mathrm{End}(V)\) that satisfy (i)–(v) below:

  1. (i)

    each of A, \(A^*\) is multiplicity-free;

  2. (ii)

    \(\lbrace E_i\rbrace _{i=0}^d\) is an ordering of the primitive idempotents of A;

  3. (iii)

    \(\lbrace E^*_i\rbrace _{i=0}^d\) is an ordering of the primitive idempotents of \(A^*\);

  4. (iv)

    \({\displaystyle { E^*_iAE^*_j = {\left\{ \begin{array}{ll} 0, &{} {\text { if }\vert i-j\vert > 1}; \\ \not =0, &{}{\text { if }\vert i-j \vert = 1} \end{array}\right. } \qquad (0 \le i,j\le d)}}\);

  5. (v)

    \({\displaystyle { E_iA^*E_j = {\left\{ \begin{array}{ll} 0, &{} {\text { if }\vert i-j\vert > 1}; \\ \not =0, &{}{\text { if }\vert i-j \vert = 1} \end{array}\right. } \qquad (0 \le i,j\le d)}}\).

The Leonard system \(\Phi \) is said to be over \({\mathbb {F}}\) and have diameter d.

Leonard pairs and Leonard systems are related as follows. Let \((A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) denote a Leonard system on V. Then \(A, A^*\) is a Leonard pair on V. Conversely, let \(A, A^*\) denote a Leonard pair on V. Then each of A, \(A^*\) is multiplicity-free [18, Lemma 1.3]. Moreover there exists an ordering \(\lbrace E_i\rbrace _{i=0}^d\) of the primitive idempotents of A, and there exists an ordering \(\lbrace E^*_i\rbrace _{i=0}^d\) of the primitive idempotents of \(A^*\), such that \((A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) is a Leonard system on V.

Next we recall the notion of isomorphism for Leonard pairs and Leonard systems.

Definition 2.4

Let \(A, A^*\) denote a Leonard pair on V, and let \(B,B^*\) denote a Leonard pair on a vector space \(V'\). By an isomorphism of Leonard pairs from \(A, A^*\) to \(B, B^*\) we mean a vector space isomorphism \(\sigma : V \rightarrow V'\) such that \(B = \sigma A \sigma ^{-1}\) and \(B^* = \sigma A^* \sigma ^{-1}\). The Leonard pairs \(A, A^*\) and \(B,B^*\) are isomorphic whenever there exists an isomorphism of Leonard pairs from \(A, A^*\) to \(B, B^*\).

Let \(\sigma : V \rightarrow V'\) denote an isomorphism of vector spaces. For \(X \in \mathrm{End}(V)\) abbreviate \(X^\sigma = \sigma X \sigma ^{-1}\) and note that \(X^\sigma \in \mathrm{End}(V')\). The map \(\mathrm{End}(V) \rightarrow \mathrm{End}(V')\), \(X \mapsto X^\sigma \) is an isomorphism of algebras. For a Leonard system \(\Phi = (A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) on V the sequence

$$\begin{aligned} \Phi ^\sigma := (A^\sigma ; \lbrace E^\sigma _i\rbrace _{i=0}^d; A^{*\sigma }; \lbrace E^{*\sigma }_i\rbrace _{i=0}^d) \end{aligned}$$

is a Leonard system on \(V'\).

Definition 2.5

Let \(\Phi \) denote a Leonard system on V, and let \(\Phi '\) denote a Leonard system on a vector space \(V'\). By an isomorphism of Leonard systems from \(\Phi \) to \(\Phi '\), we mean an isomorphism of vector spaces \(\sigma : V\rightarrow V'\) such that \(\Phi ^\sigma = \Phi '\). The Leonard systems \(\Phi \) and \(\Phi '\) are isomorphic whenever there exists an isomorphism of Leonard systems from \(\Phi \) to \(\Phi '\).

In [18, Theorem 1.9] we classified the Leonard systems up to isomorphism. Our first main goal in the present paper is to give an improved proof of this classification. This goal will be accomplished in Sects. 3–17. The statement of the classification is given in Theorem 10.1. The proof of Theorem 10.1 will be completed in Sect. 17.

Recall the commutator notation \([r,s]= rs - sr\).

3 Pre Leonard Systems

As we start our investigation of Leonard systems, it is helpful to consider a more general object called a pre Leonard system. This object is defined as follows.

Definition 3.1

By a pre Leonard system on V, we mean a sequence

$$\begin{aligned} \Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d) \end{aligned}$$
(3)

of elements in \(\mathrm{End}(V)\) that satisfy conditions (i)–(iii) in Definition 2.3.

The results in this section refer to the pre Leonard system \(\Phi \) from (3).

Definition 3.2

For \(0 \le i \le d\) let \(\theta _i\) (resp. \(\theta ^*_i\)) denote the eigenvalue of A (resp. \(A^*\)) for \(E_i\) (resp. \(E^*_i\)). We call \(\lbrace \theta _i \rbrace _{i=0}^d\) (resp. \(\lbrace \theta ^*_i \rbrace _{i=0}^d\)) the eigenvalue sequence (resp. dual eigenvalue sequence) of \(\Phi \). Let \({\mathcal {D}}\) (resp. \({\mathcal {D}}^*\)) denote the subalgebra of \(\mathrm{End}(V)\) generated by A (resp. \(A^*\)).

Definition 3.3

Define

$$\begin{aligned} a_i = \mathrm{tr} (A E^*_i), \qquad \qquad a^*_i = \mathrm{tr} (A^* E_i) \qquad \qquad (0 \le i \le d). \end{aligned}$$

We call \(\lbrace a_i \rbrace _{i=0}^d\) (resp. \(\lbrace a^*_i \rbrace _{i=0}^d\)) the diagonal sequence (resp. dual diagonal sequence) of \(\Phi \).

Lemma 3.4

We have

$$\begin{aligned}&\theta _0 + \theta _1 + \cdots + \theta _d = a_0 + a_1 + \cdots + a_d, \end{aligned}$$
(4)
$$\begin{aligned}&\theta ^*_0 + \theta ^*_1 + \cdots + \theta ^*_d = a^*_0 + a^*_1 + \cdots + a^*_d. \end{aligned}$$
(5)

Proof

To obtain (4), observe that

$$\begin{aligned} \sum _{i=0}^d \theta _i = \mathrm{tr}(A) = \mathrm{tr} \Bigl (A \sum _{i=0}^d E^*_i\Bigr ) = \sum _{i=0}^d a_i. \end{aligned}$$

The proof of (5) is similar. \(\square \)

Lemma 3.5

For \(0 \le i \le d\),

  1. (i)

    \(E^*_i A E^*_i = a_i E^*_i\);

  2. (ii)

    \(E_i A^* E_i = a^*_i E_i\).

Proof

  1. (i)

    Abbreviate \({{\mathcal {A}}}=\mathrm{End}(V)\). Since \(E^*_i\) has rank 1, the vector space \(E^*_i{\mathcal {A}} E^*_i\) is spanned by \(E^*_i\). Therefore there exists \(\alpha _i \in {\mathbb {F}}\) such that \(E^*_i A E^*_i=\alpha _i E^*_i\). In this equation take the trace of each side and use Definition 3.3 along with \(\mathrm{tr}(XY) = \mathrm{tr}(YX)\) to obtain \(a_i=\alpha _i\).

  2. (ii)

    Similar to the proof of (i) above.

\(\square \)
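Lemma 3.4 and Lemma 3.5(i) can be checked numerically. In the sketch below, A is irreducible tridiagonal with distinct eigenvalues, and we take \(A^* = A^t\); this choice of pre Leonard system data is ours, made only so that neither matrix is diagonal (any multiplicity-free \(A^*\) would do). The idempotents \(E^*_i\) are built from the product formula (1) applied to \(A^*\):

```python
import numpy as np

d = 3
A = np.zeros((d + 1, d + 1))
for i in range(1, d + 1):
    A[i, i - 1] = i
    A[i - 1, i] = d - i + 1
Astar = A.T   # same distinct eigenvalues as A, so multiplicity-free

theta = np.sort(np.linalg.eigvals(A).real)          # eigenvalues of A
theta_star = np.sort(np.linalg.eigvals(Astar).real)
Id = np.eye(d + 1)
Estar = []
for i in range(d + 1):
    M = Id.copy()
    for j in range(d + 1):
        if j != i:
            M = M @ (Astar - theta_star[j] * Id) / (theta_star[i] - theta_star[j])
    Estar.append(M)

a = [np.trace(A @ Ei) for Ei in Estar]   # diagonal sequence, Definition 3.3
# Lemma 3.4, eq. (4): the eigenvalues and the diagonal sequence have equal sums.
print(np.isclose(sum(theta), sum(a)))    # True (both equal tr(A))
# Lemma 3.5(i): E*_i A E*_i = a_i E*_i.
print(all(np.allclose(Estar[i] @ A @ Estar[i], a[i] * Estar[i])
          for i in range(d + 1)))        # True
```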

We have been discussing the pre Leonard system

$$\begin{aligned} \Phi = (A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d) \end{aligned}$$

on V. Each of the following is a pre Leonard system on V:

$$\begin{aligned}&\Phi ^\Downarrow := (A; \lbrace E_{d-i}\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d), \\&\Phi ^\downarrow := (A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_{d-i}\rbrace _{i=0}^d), \\&\Phi ^*:= (A^*; \lbrace E^*_i\rbrace _{i=0}^d; A; \lbrace E_{i}\rbrace _{i=0}^d). \end{aligned}$$

Proposition 3.6

With the above notation,

$$\begin{aligned}
\begin{array}{c|cccc}
\text{pre LS} &{} \text{eigenvalue seq.} &{} \text{dual eigenvalue seq.} &{} \text{diagonal seq.} &{} \text{dual diagonal seq.} \\
\hline
\Phi &{} \lbrace \theta _i \rbrace _{i=0}^d &{} \lbrace \theta ^*_i \rbrace _{i=0}^d &{} \lbrace a_i \rbrace _{i=0}^d &{} \lbrace a^*_i \rbrace _{i=0}^d \\
\Phi ^\Downarrow &{} \lbrace \theta _{d-i} \rbrace _{i=0}^d &{} \lbrace \theta ^*_i \rbrace _{i=0}^d &{} \lbrace a_i \rbrace _{i=0}^d &{} \lbrace a^*_{d-i} \rbrace _{i=0}^d \\
\Phi ^\downarrow &{} \lbrace \theta _i \rbrace _{i=0}^d &{} \lbrace \theta ^*_{d-i} \rbrace _{i=0}^d &{} \lbrace a_{d-i} \rbrace _{i=0}^d &{} \lbrace a^*_i \rbrace _{i=0}^d \\
\Phi ^* &{} \lbrace \theta ^*_i \rbrace _{i=0}^d &{} \lbrace \theta _i \rbrace _{i=0}^d &{} \lbrace a^*_i \rbrace _{i=0}^d &{} \lbrace a_i \rbrace _{i=0}^d
\end{array}
\end{aligned}$$

Proof

Use Definitions 3.2, 3.3. \(\square \)

4 The Antiautomorphism \(\dagger \)

We continue to discuss the pre Leonard system \(\Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) from Definition 3.1.

Lemma 4.1

Assume that

$$\begin{aligned} E^*_iAE^*_j = {\left\{ \begin{array}{ll} 0, &{} {\text { if }\vert i-j\vert > 1}; \\ \not =0, &{}{\text { if }\vert i-j \vert = 1} \end{array}\right. } \qquad (0 \le i,j\le d). \end{aligned}$$

Then the elements

$$\begin{aligned} A^i E^*_0 A^j \qquad \qquad 0 \le i,j\le d \end{aligned}$$
(6)

form a basis for the vector space \(\mathrm{End}(V)\).

Proof

For \(0 \le i \le d\) pick \(0 \not =v_i \in E^*_iV\). So \(\lbrace v_i \rbrace _{i=0}^d \) is a basis for V. Without loss of generality, we may identify each \(X\in \mathrm{End}(V)\) with the matrix in \(\mathrm{Mat}_{d+1}({\mathbb {F}})\) that represents X with respect to \(\lbrace v_i \rbrace _{i=0}^d\). From this point of view A is irreducible tridiagonal and \(E^*_0 = \mathrm{diag}(1,0,\ldots ,0)\). Using these matrices one routinely checks that the elements (6) are linearly independent. There are \((d+1)^2\) elements listed in (6), and this is the dimension of \(\mathrm{End}(V)\). Therefore the elements (6) form a basis for \(\mathrm{End}(V)\). \(\square \)
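Lemma 4.1 admits a quick numerical sanity check. The sketch below uses a hypothetical irreducible tridiagonal A (entries our choice) and, as in the proof, represents \(E^*_0\) as \(\mathrm{diag}(1,0,\ldots,0)\); the \((d+1)^2\) products \(A^i E^*_0 A^j\) should have full rank as vectors in \(\mathrm{End}(V)\):

```python
import numpy as np

d = 3
A = np.zeros((d + 1, d + 1))
for i in range(1, d + 1):
    A[i, i - 1] = i          # nonzero subdiagonal
    A[i - 1, i] = d - i + 1  # nonzero superdiagonal
E0 = np.zeros((d + 1, d + 1))
E0[0, 0] = 1.0               # E*_0 = diag(1, 0, ..., 0)

mats = [np.linalg.matrix_power(A, i) @ E0 @ np.linalg.matrix_power(A, j)
        for i in range(d + 1) for j in range(d + 1)]
G = np.stack([M.flatten() for M in mats])   # one row per product A^i E*_0 A^j
print(np.linalg.matrix_rank(G) == (d + 1) ** 2)   # True: they span End(V)
```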

Lemma 4.2

Under the assumption in Lemma 4.1, each of the following is a generating set for the algebra \(\mathrm{End}(V)\): (i) \(A, E^*_0\); (ii) \(A, A^*\).

Proof

  1. (i)

    By Lemma 4.1.

  2. (ii)

    By (i) above and since \(E^*_0\) is a polynomial in \(A^*\).

\(\square \)

By an automorphism of \(\mathrm{End}(V)\) we mean an algebra isomorphism \(\mathrm{End}(V)\rightarrow \mathrm{End}(V)\). By an antiautomorphism of \(\mathrm{End}(V)\) we mean a vector space isomorphism \(\zeta : \mathrm{End}(V) \rightarrow \mathrm{End}(V) \) such that \((XY)^\zeta = Y^\zeta X^\zeta \) for all \(X, Y \in \mathrm{End}(V)\).

Lemma 4.3

Under the assumption in Lemma 4.1,

  1. (i)

    there exists a unique antiautomorphism \(\dagger \) of \(\mathrm{End}(V)\) that fixes each of \(A, A^*\);

  2. (ii)

    \(\dagger \) fixes everything in \({\mathcal {D}}\) and everything in \({\mathcal {D}}^*\);

  3. (iii)

    \(\dagger \) fixes each of \(E_i, E^*_i\) for \(0 \le i \le d\);

  4. (iv)

    \((X^\dagger )^\dagger = X\) for all \(X \in \mathrm{End}(V)\).

Proof

  1. (i)

    First we show that \(\dagger \) exists. For \(0 \le i \le d\) pick \(0 \not =v_i \in E^*_iV\). So \(\lbrace v_i \rbrace _{i=0}^d\) is a basis for V. For \(X \in \mathrm{End}(V)\) let \(X^\sharp \in \mathrm{Mat}_{d+1}({\mathbb {F}})\) represent X with respect to \(\lbrace v_i \rbrace _{i=0}^d\). The map \(\sharp : \mathrm{End}(V) \rightarrow \mathrm{Mat}_{d+1}({\mathbb {F}})\), \(X \mapsto X^\sharp \) is an algebra isomorphism. Write \(B=A^\sharp \) and \(B^*=A^{*\sharp }\). The matrix B is irreducible tridiagonal and \(B^*=\mathrm{diag}(\theta ^*_0, \theta ^*_1,\ldots , \theta ^*_d)\). Define a diagonal matrix \(K \in \mathrm{Mat}_{d+1}({\mathbb {F}})\) with diagonal entries

    $$\begin{aligned} K_{ii} = \frac{B_{01} B_{12} \cdots B_{i-1,i}}{B_{10}B_{21} \cdots B_{i,i-1}} \qquad \qquad (0 \le i \le d). \end{aligned}$$

    The matrix K is invertible and \(K^{-1} B^t K = B\). Define a map \(\flat : \mathrm{Mat}_{d+1}({\mathbb {F}}) \rightarrow \mathrm{Mat}_{d+1}({\mathbb {F}}), X \mapsto K^{-1} X^t K\). The map \(\flat \) is an antiautomorphism of \(\mathrm{Mat}_{d+1}({\mathbb {F}})\) that fixes each of \(B,B^*\). The composition \(\sharp ^{-1} \circ \flat \circ \sharp \) is an antiautomorphism of \(\mathrm{End}(V)\) that fixes each of \(A,A^*\). We have shown that \(\dagger \) exists. Next we show that \(\dagger \) is unique. Let \(\zeta \) denote an antiautomorphism of \(\mathrm{End}(V)\) that fixes each of \(A,A^*\). Then the composition \(\dagger \circ \zeta ^{-1}\) is an automorphism of \(\mathrm{End}(V)\) that fixes each of \(A, A^*\). Now \(\dagger \circ \zeta ^{-1} = 1\) in view of Lemma 4.2(ii). Therefore \(\dagger = \zeta \). We have shown that \(\dagger \) is unique.

  2. (ii)

    Since A (resp. \(A^*\)) generates \({\mathcal {D}}\) (resp. \({\mathcal {D}}^*\)).

  3. (iii)

    Since \(E_i \in {\mathcal {D}}\) and \(E^*_i \in {\mathcal {D}}^*\) for \(0 \le i \le d\).

  4. (iv)

    The composition \(\dagger \circ \dagger \) is an automorphism of \(\mathrm{End}(V)\) that fixes each of \(A, A^*\). Now \(\dagger \circ \dagger = 1\) in view of Lemma 4.2(ii).

\(\square \)
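The matrix K from the proof of Lemma 4.3(i) can be verified numerically. Below is a sketch with a hypothetical irreducible tridiagonal B of our choosing (an arbitrary diagonal is included, since the construction does not depend on it):

```python
import numpy as np

d = 4
B = np.diag(np.arange(d + 1, dtype=float))   # arbitrary diagonal entries
for i in range(1, d + 1):
    B[i, i - 1] = i                          # nonzero subdiagonal
    B[i - 1, i] = d - i + 1                  # nonzero superdiagonal

# K_{ii} = (B_{01} B_{12} ... B_{i-1,i}) / (B_{10} B_{21} ... B_{i,i-1}),
# built up one factor at a time.
K = np.eye(d + 1)
for i in range(1, d + 1):
    K[i, i] = K[i - 1, i - 1] * B[i - 1, i] / B[i, i - 1]

print(np.allclose(np.linalg.inv(K) @ B.T @ K, B))   # True: K^{-1} B^t K = B
```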

Proposition 4.4

Consider the following four conditions:

  1. (i)

    \(\displaystyle { E^*_iAE^*_j = {\left\{ \begin{array}{ll} 0, &{} {\text { if } i-j > 1}; \\ \not =0, &{}{\text { if } i-j = 1} \end{array}\right. } \qquad (0 \le i,j\le d); }\)

  2. (ii)

    \(\displaystyle { E^*_iAE^*_j = {\left\{ \begin{array}{ll} 0, &{} {\text { if } j-i > 1}; \\ \not =0, &{}{\text { if } j-i = 1} \end{array}\right. } \qquad (0 \le i,j\le d); }\)

  3. (iii)

    \(\displaystyle { E_iA^*E_j = {\left\{ \begin{array}{ll} 0, &{} {\text { if } i-j > 1}; \\ \not =0, &{}{\text { if } i-j = 1} \end{array}\right. } \qquad (0 \le i,j\le d); }\)

  4. (iv)

    \(\displaystyle { E_iA^*E_j = {\left\{ \begin{array}{ll} 0, &{} {\text { if } j-i > 1}; \\ \not =0, &{}{\text { if } j-i = 1} \end{array}\right. } \qquad (0 \le i,j\le d). }\)

Assume at least three of (i)–(iv) hold. Then each of (i)–(iv) holds; in other words the pre Leonard system \(\Phi \) is a Leonard system.

Proof

Interchanging \(A, A^*\) if necessary, we may assume without loss of generality that (i), (ii) hold. Now the assumption of Lemma 4.1 holds, so Lemma 4.3 applies. Consider the map \(\dagger \) from Lemma 4.3. For \(0 \le i,j\le d\) we have

$$\begin{aligned} (E_i A^*E_j)^\dagger = E_jA^*E_i. \end{aligned}$$

Therefore \(E_i A^*E_j= 0\) if and only if \(E_jA^*E_i=0\). Consequently (iii) holds if and only if (iv) holds. The result follows. \(\square \)

5 The Polynomials \(\tau _i, \eta _i, \tau ^*_i, \eta ^*_i\)

We continue to discuss the pre Leonard system \(\Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) from Definition 3.1.

Let \(\lambda \) denote an indeterminate. Let \({\mathbb {F}}[\lambda ]\) denote the algebra consisting of the polynomials in \(\lambda \) that have all coefficients in \({\mathbb {F}}\).

Definition 5.1

For \(0 \le i \le d\) define \(\tau _i, \eta _i, \tau ^*_i, \eta ^*_i \in {\mathbb {F}}[\lambda ]\) by

$$\begin{aligned}&\tau _i = (\lambda -\theta _0) (\lambda -\theta _1) \cdots (\lambda -\theta _{i-1}), \qquad \qquad \eta _i = (\lambda -\theta _d) (\lambda -\theta _{d-1}) \cdots (\lambda -\theta _{d-i+1}), \\&\tau ^*_i = (\lambda -\theta ^*_0) (\lambda -\theta ^*_1) \cdots (\lambda -\theta ^*_{i-1}), \qquad \qquad \eta ^*_i = (\lambda -\theta ^*_d) (\lambda -\theta ^*_{d-1}) \cdots (\lambda -\theta ^*_{d-i+1}). \end{aligned}$$

Each of \(\tau _i, \eta _i, \tau ^*_i, \eta ^*_i \) is monic with degree i.
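To experiment with Definition 5.1, the polynomials can be generated by repeated multiplication by linear factors. The following is a minimal sketch in Python with exact rational arithmetic; the eigenvalue sequence used is a hypothetical sample, not tied to any particular system.

```python
from fractions import Fraction as F

def times_linear(p, r):
    # multiply the polynomial p (coefficients, constant term first) by (lambda - r)
    return [s - r * c for s, c in zip([F(0)] + p, p + [F(0)])]

d = 4
th = [F(2 * h + 1) for h in range(d + 1)]   # hypothetical eigenvalue sequence

tau, eta = [[F(1)]], [[F(1)]]
for i in range(d):
    tau.append(times_linear(tau[i], th[i]))        # tau_{i+1} = (lambda - theta_i) tau_i
    eta.append(times_linear(eta[i], th[d - i]))    # eta_{i+1} = (lambda - theta_{d-i}) eta_i

for i in range(d + 1):
    assert len(tau[i]) == i + 1 and tau[i][-1] == 1    # monic with degree i
    assert len(eta[i]) == i + 1 and eta[i][-1] == 1
```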

We mention some results about \(\lbrace \tau _i\rbrace _{i=0}^d\) and \(\lbrace \eta _i\rbrace _{i=0}^d\); similar results apply to \(\lbrace \tau ^*_i\rbrace _{i=0}^d\) and \(\lbrace \eta ^*_i\rbrace _{i=0}^d\).

Lemma 5.2

The vectors \(\lbrace \tau _i(A)\rbrace _{i=0}^d\) form a basis for \({\mathcal {D}}\). Moreover the vectors \(\lbrace \eta _i(A)\rbrace _{i=0}^d\) form a basis for \({\mathcal {D}}\).

Proof

The result follows since \(\lbrace A^i\rbrace _{i=0}^d\) is a basis for \({\mathcal {D}}\) and \(\tau _i\), \(\eta _i\) have degree i for \(0 \le i \le d\). \(\square \)


Lemma 5.3

For \(0 \le i \le d\),

  1. (i)

    \(\tau _i(A) = \sum _{h=i}^d \tau _i(\theta _h) E_h\);

  2. (ii)

    \(\eta _i(A) = \sum _{h=0}^{d-i} \eta _i(\theta _h) E_h\).

Proof

  1. (i)

    We have \(A=\sum _{h=0}^d \theta _h E_h\), so \(\tau _i(A)=\sum _{h=0}^d \tau _i(\theta _h) E_h\). However \(\tau _i(\theta _h)=0\) for \(0 \le h \le i-1\), so \(\tau _i(A)=\sum _{h=i}^d \tau _i(\theta _h) E_h\).

  2. (ii)

    Apply (i) above to \(\Phi ^\Downarrow \).

\(\square \)
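Lemma 5.3(i) amounts to a vanishing pattern: \(\tau _i(\theta _h)=0\) for \(h < i\), while \(\tau _i(\theta _h)\not =0\) for \(h \ge i\) since the eigenvalues are mutually distinct. A quick check on hypothetical sample data:

```python
from fractions import Fraction as F

d = 5
th = [F(3 * h + 1) for h in range(d + 1)]   # hypothetical distinct eigenvalues

def tau_at(i, x):
    # evaluate tau_i at x: product of (x - theta_h) for h < i
    v = F(1)
    for h in range(i):
        v *= x - th[h]
    return v

# Since A = sum_h theta_h E_h, tau_i(A) = sum_h tau_i(theta_h) E_h,
# and the terms with h < i vanish, as in Lemma 5.3(i)
for i in range(d + 1):
    assert all(tau_at(i, th[h]) == 0 for h in range(i))
    assert all(tau_at(i, th[h]) != 0 for h in range(i, d + 1))
```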

Lemma 5.4

For \(0 \le i \le d\),

  1. (i)

    the elements \(\lbrace \tau _j(A)\rbrace _{j=0}^i\) and \(\lbrace A^j\rbrace _{j=0}^i\) have the same span;

  2. (ii)

    the elements \(\lbrace \tau _j(A)\rbrace _{j=i}^d\) and \(\lbrace E_j\rbrace _{j=i}^d\) have the same span.

Proof

  1. (i)

    The polynomial \(\tau _j\) has degree j for \(0 \le j \le d\).

  2. (ii)

    By Lemma 5.2 it suffices to show that the span of \(\lbrace \tau _j(A)\rbrace _{j=i}^d\) is contained in the span of \(\lbrace E_j\rbrace _{j=i}^d\). But this follows from Lemma 5.3(i).

\(\square \)

Lemma 5.5

For \(0 \le i \le d\),

  1. (i)

    the elements \(\lbrace \eta _j(A)\rbrace _{j=0}^i\) and \(\lbrace A^j\rbrace _{j=0}^i\) have the same span;

  2. (ii)

    the elements \(\lbrace \eta _j(A)\rbrace _{j=i}^d\) and \(\lbrace E_j\rbrace _{j=0}^{d-i}\) have the same span.

Proof

Apply Lemma 5.4 to \(\Phi ^\Downarrow \). \(\square \)

6 Normalizing Idempotents

We continue to discuss the pre Leonard system \(\Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) from Definition 3.1.

Next we explain what it means for \(E^*_0\) to be normalizing. This concept was introduced in [7, Sect. 6], although our point of view is different.

Definition 6.1

The primitive idempotent \(E^*_0\) is called normalizing whenever

$$\begin{aligned} E_i E^*_0 \not =0 \qquad \qquad (0 \le i \le d). \end{aligned}$$

In the next two lemmas we give some necessary and sufficient conditions for \(E^*_0\) to be normalizing. The proofs are routine, and omitted.

Lemma 6.2

The following (i)–(iv) are equivalent:

  1. (i)

    \(E^*_0\) is normalizing;

  2. (ii)

    \({{\mathcal {D}}}E^*_0\) has dimension \(d+1\);

  3. (iii)

    the elements \(\lbrace A^i E^*_0\rbrace _{i=0}^d\) are linearly independent;

  4. (iv)

    for \(X \in {\mathcal {D}}\), \(X E^*_0=0\) implies \(X= 0\).
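To see Lemma 6.2 in coordinates, one may take \(A\) diagonal with distinct diagonal entries, so that \(E_i\) projects onto the i-th coordinate, and take \(E^*_0\) to be a rank-one idempotent whose image is spanned by a vector \(\xi \). Then \(E_iE^*_0 \not =0\) means \(\xi _i \not =0\), and the elements \(A^i E^*_0\) are independent exactly when the matrix with columns \(A^i \xi \) (a Vandermonde matrix scaled by the \(\xi _h\)) has full rank. A minimal sketch with hypothetical data, in exact rational arithmetic:

```python
from fractions import Fraction as F

def rank(M):
    # row-reduce over the rationals
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

d = 3
th = [F(h) for h in range(d + 1)]   # A = diag(theta_0,...,theta_d), distinct

def moment_matrix(xi):
    # columns are A^i xi for i = 0..d
    return [[xi[h] * th[h] ** i for i in range(d + 1)] for h in range(d + 1)]

# all coordinates of xi nonzero <=> E_i E*_0 != 0 for all i <=> full rank
assert rank(moment_matrix([F(1), F(2), F(1), F(3)])) == d + 1
assert rank(moment_matrix([F(1), F(0), F(1), F(3)])) < d + 1
```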

Lemma 6.3

The following (i)–(iv) are equivalent:

  1. (i)

    \(E^*_0\) is normalizing;

  2. (ii)

    \(E_iV = E_iE^*_0V\) for \(0 \le i \le d\);

  3. (iii)

    \({{\mathcal {D}}}E^*_0V = V\);

  4. (iv)

    for \(0 \not =\xi \in E^*_0V\) the map \({\mathcal {D}} \rightarrow V\), \(X \mapsto X\xi \) is a bijection.

Proposition 6.4

Assume that \(E^*_0\) is normalizing. Then for \(X \in \mathrm{End}(V)\) and \(0 \le i \le d\) the following are equivalent:

  1. (i)

    \(XE_i=0\);

  2. (ii)

    \(XE_iE^*_0=0\).

Proof

Using Lemma 6.3(ii),

$$\begin{aligned} XE_i=0 \quad \Leftrightarrow \quad XE_iV=0 \quad \Leftrightarrow \quad XE_iE^*_0V=0 \quad \Leftrightarrow \quad XE_iE^*_0=0. \end{aligned}$$

\(\square \)

7 Normalizing Idempotents and Decompositions

We continue to discuss the pre Leonard system \(\Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) from Definition 3.1.

Definition 7.1

By a decomposition of V we mean a sequence \(\lbrace V_i \rbrace _{i=0}^d\) of one-dimensional subspaces of V such that the sum \(V= \sum _{i=0}^d V_i\) is direct.

Example 7.2

Each of the sequences

$$\begin{aligned} \lbrace E_iV\rbrace _{i=0}^d, \qquad \qquad \lbrace E^*_iV\rbrace _{i=0}^d \end{aligned}$$

is a decomposition of V.

Example 7.3

Let \(\lbrace v_i\rbrace _{i=0}^d\) denote a basis for V. For \(0 \le i \le d\) let \(V_i\) denote the span of \(v_i\). Then \(\lbrace V_i\rbrace _{i=0}^d\) is a decomposition of V, said to be induced by \(\lbrace v_i\rbrace _{i=0}^d\).

Lemma 7.4

Assume that \(E^*_0\) is normalizing, and define

$$\begin{aligned} U_i = \tau _i(A) E^*_0V \qquad \qquad (0 \le i \le d). \end{aligned}$$

Then the following (i)–(v) hold:

  1. (i)

    \(\lbrace U_i\rbrace _{i=0}^d \) is a decomposition of V;

  2. (ii)

    \((A-\theta _i I)U_i = U_{i+1}\)       \((0 \le i \le d-1)\);

  3. (iii)

    \((A-\theta _d I)U_d = 0\);

  4. (iv)

    \(U_0 + U_1 + \cdots + U_i = E^*_0V + A E^*_0V+ \cdots + A^i E^*_0V \qquad (0 \le i \le d);\)

  5. (v)

    \(U_i + U_{i+1} + \cdots + U_d = E_iV+ E_{i+1}V+ \cdots + E_dV \qquad (0 \le i\le d).\)       

Proof

  1. (i)

    By Lemma 6.3(iv) and since \(\lbrace \tau _i(A)\rbrace _{i=0}^d\) is a basis for \({\mathcal {D}}\).

  2. (ii)

    By Definition 5.1.

  3. (iii)

    Since \(0=\prod _{i=0}^d (A-\theta _i I)=(A-\theta _d I)\tau _d(A)\).

  4. (iv)

    By Lemma 5.4(i) and Lemma 6.3(iv).

  5. (v)

    By Lemma 5.4(ii) and Lemma 6.3(iv).

\(\square \)

Lemma 7.5

The following (i)–(iii) are equivalent:

  1. (i)

    \(\displaystyle { E^*_iAE^*_j = {\left\{ \begin{array}{ll} 0, &{} {\text { if } i-j > 1}; \\ \not =0, &{}{\text { if } i-j = 1} \end{array}\right. } \qquad (0 \le i,j\le d); }\)

  2. (ii)

    for \(0 \le i \le d\) there exists \(f_i \in {{\mathbb {F}}}[\lambda ]\) such that \(\mathrm{deg}(f_i)=i\) and \(E^*_iV = f_i(A) E^*_0V\);

  3. (iii)

    for \(0 \le i \le d\),

    $$\begin{aligned} E^*_0V + E^*_1V+ \cdots + E^*_iV = E^*_0V + A E^*_0V + \cdots + A^i E^*_0V. \end{aligned}$$
    (7)

Assume that (i)–(iii) hold. Then \(E^*_0\) is normalizing.

Proof

\(\mathrm{(i)} \Rightarrow \mathrm{(ii)}\) For \(0 \le i \le d\) pick \(0 \not =v_i \in E^*_iV\). So \(\lbrace v_i\rbrace _{i=0}^d\) is a basis for V. Let \( B \in \mathrm{Mat}_{d+1}({\mathbb {F}})\) represent A with respect to \(\lbrace v_i \rbrace _{i=0}^d\). The entries of B satisfy

$$\begin{aligned} B_{ij} = {\left\{ \begin{array}{ll} 0, &{} {\text { if } i-j > 1}; \\ \not =0, &{}{\text { if } i-j = 1} \end{array}\right. } \qquad (0 \le i,j\le d). \end{aligned}$$

Define polynomials \(\lbrace f_i \rbrace _{i=0}^d\) in \({\mathbb {F}}[\lambda ]\) by \(f_0=1\) and

$$\begin{aligned} \lambda f_j = \sum _{i=0}^{j+1} B_{ij} f_i \qquad \qquad (0 \le j\le d-1). \end{aligned}$$

For \(0 \le i \le d\) the polynomial \(f_i\) has degree i. Also \(v_i = f_i(A)v_0\), so \(E^*_iV = f_i(A)E^*_0V\).

\(\mathrm{(ii)} \Rightarrow \mathrm{(iii)}\) The polynomial \(f_j\) has degree at most i for \(0 \le j\le i\), so

$$\begin{aligned} E^*_0V + E^*_1V+ \cdots + E^*_iV \subseteq E^*_0V + A E^*_0V + \cdots + A^i E^*_0V. \end{aligned}$$

In this inclusion, the left-hand side has dimension \(i+1\) and the right-hand side has dimension at most \(i+1\). Therefore the inclusion holds with equality.

\(\mathrm{(iii)} \Rightarrow \mathrm{(i)}\) For \(0 \le i \le d\) let \(V_i\) denote the common value in (7). Observe that

$$\begin{aligned} E^*_i V_j = {\left\{ \begin{array}{ll} 0, &{} {\text { if } i > j}; \\ \not =0, &{}{\text { if } i\le j} \end{array}\right. } \qquad (0 \le i,j\le d). \end{aligned}$$

Also observe that

$$\begin{aligned} V_{j+1} = V_j + AV_j \qquad \qquad (0 \le j \le d-1). \end{aligned}$$

Now for \(0 \le i,j\le d\) we check the conditions in (i). First assume that \(i-j>1\). Then

$$\begin{aligned} E^*_iAE^*_jV \subseteq E^*_iAV_j \subseteq E^*_i V_{j+1} = 0, \end{aligned}$$

so \(E^*_iAE^*_j = 0\). Next assume that \(i-j=1\). To show that \(E^*_iAE^*_j\not =0\), we suppose \(E^*_iAE^*_j=0\) and get a contradiction. For \(0 \le h \le i-1\) we have \(E^*_iA E^*_h = 0\), so \(E^*_iA E^*_hV = 0\). Therefore \(E^*_i A V_{i-1} = 0\). We also have \(E^*_iV_{i-1}=0\), so \(E^*_iV_i = E^*_i(V_{i-1} + A V_{i-1}) = 0\), for a contradiction. Therefore \(E^*_iAE^*_j\not =0\).

Assume that (i)–(iii) hold. Setting \(i=d\) in (iii) we obtain \(V={{\mathcal {D}}} E^*_0V\). Consequently \(E^*_0\) is normalizing by Lemma 6.3(i),(iii). \(\square \)
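The construction in the proof of \(\mathrm{(i)} \Rightarrow \mathrm{(ii)}\) can be carried out explicitly: for a matrix B of the stated shape, the recurrence produces polynomials \(f_i\) of degree i with \(f_i(B)e_0 = e_i\), where \(e_i\) denotes the i-th standard basis vector. A minimal sketch with hypothetical entries, in exact rational arithmetic:

```python
from fractions import Fraction as F

d = 3
# hypothetical matrix with B[i][j] = 0 if i - j > 1 and B[i][j] != 0 if i - j = 1
B = [[F(1), F(2), F(0), F(3)],
     [F(1), F(4), F(5), F(1)],
     [F(0), F(2), F(6), F(2)],
     [F(0), F(0), F(3), F(7)]]

# build f_0,...,f_d from f_0 = 1 and lambda*f_j = sum_{i=0}^{j+1} B[i][j] f_i
f = [[F(1)]]
for j in range(d):
    acc = [F(0)] + f[j]                  # lambda * f_j, degree j + 1
    for i in range(j + 1):
        for k, c in enumerate(f[i]):
            acc[k] -= B[i][j] * c
    f.append([c / B[j + 1][j] for c in acc])

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(d + 1)) for i in range(d + 1)]

powers = [[F(1)] + [F(0)] * d]           # the vectors B^k e_0
for _ in range(d):
    powers.append(matvec(B, powers[-1]))

for i in range(d + 1):
    assert len(f[i]) == i + 1 and f[i][-1] != 0        # degree exactly i
    v = [sum(c * powers[k][h] for k, c in enumerate(f[i])) for h in range(d + 1)]
    assert v == [F(h == i) for h in range(d + 1)]      # f_i(B) e_0 = e_i
```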

Proposition 7.6

The following (i)–(iii) are equivalent:

  1. (i)

    Both

    $$\begin{aligned} E^*_iAE^*_j&= {\left\{ \begin{array}{ll} 0, &{} {\text { if } i-j > 1}; \\ \not =0, &{}{\text { if }i-j = 1} \end{array}\right. } \qquad (0 \le i,j\le d), \end{aligned}$$
    (8)
    $$\begin{aligned} E_i A^*E_j&= 0 \quad \text { if }j-i> 1 \qquad (0 \le i,j \le d). \end{aligned}$$
    (9)
  2. (ii)

    There exists a decomposition \(\lbrace U_i \rbrace _{i=0}^d\) of V such that

    $$\begin{aligned}&(A-\theta _i I) U_i = U_{i+1} \qquad (0 \le i \le d-1), \qquad \quad (A-\theta _d I)U_d=0, \end{aligned}$$
    (10)
    $$\begin{aligned}&(A^*-\theta ^*_i I)U_i \subseteq U_{i-1} \qquad (1 \le i \le d), \qquad \qquad (A^* - \theta ^*_0 I) U_0=0. \end{aligned}$$
    (11)
  3. (iii)

    There exist scalars \(\lbrace \varphi _i\rbrace _{i=1}^d\) in \({\mathbb {F}}\) and a basis for V with respect to which

    $$\begin{aligned} A: \quad \left( \begin{array}{cccccc} \theta _0 &{} &{} &{} &{} &{} \mathbf{0} \\ 1 &{} \theta _1 &{} &{} &{} &{} \\ &{} 1 &{} \theta _2 &{} &{} &{} \\ &{}&{} \cdot &{} \cdot &{}&{} \\ &{} &{} &{} \cdot &{} \cdot &{} \\ \mathbf{0} &{} &{} &{} &{} 1 &{} \theta _d \end{array} \right) , \qquad \quad A^*: \quad \left( \begin{array}{cccccc} \theta ^*_0 &{} \varphi _1 &{} &{} &{} &{} \mathbf{0} \\ &{} \theta ^*_1 &{} \varphi _2 &{} &{} &{} \\ &{} &{} \theta ^*_2 &{} \cdot &{} &{} \\ &{}&{} &{} \cdot &{} \cdot &{} \\ &{} &{} &{} &{} \cdot &{} \varphi _d \\ \mathbf{0} &{} &{} &{} &{} &{} \theta ^*_d \end{array} \right) . \end{aligned}$$
    (12)

Assume that (i)–(iii) hold. Then \(E^*_0\) is normalizing and \(U_i= \tau _i(A)E^*_0V\) for \(0 \le i \le d\). The basis for V from (iii) is \(\lbrace \tau _i(A)\xi \rbrace _{i=0}^d\), where \(0 \not =\xi \in E^*_0V\). This basis induces the decomposition \(\lbrace U_i \rbrace _{i=0}^d\). The sequence \(\lbrace \varphi _i\rbrace _{i=1}^d\) is unique.

Proof

\(\mathrm{(i)}\Rightarrow \mathrm{(ii)} \) The element \(E^*_0\) is normalizing by Lemma 7.5, so Lemma 7.4 applies. Consider the decomposition \(\lbrace U_i \rbrace _{i=0}^d\) of V from Lemma 7.4. This decomposition satisfies (10) by Lemma 7.4(ii),(iii). We now show (11). By Lemma 7.4(iv) and Lemma 7.5(iii),

$$\begin{aligned} U_0 + \cdots + U_i = E^*_0V+\cdots + E^*_iV \qquad \qquad (0 \le i \le d). \end{aligned}$$

For \(0 \le i \le d\),

$$\begin{aligned} (A^*-\theta ^*_i I) U_i&\subseteq (A^*-\theta ^*_iI)(U_0 + \cdots + U_i) \\&= (A^*-\theta ^*_iI)(E^*_0V+\cdots + E^*_iV ) \\&= E^*_0V+\cdots + E^*_{i-1}V \\&= U_0+\cdots + U_{i-1}. \end{aligned}$$

Also, using Lemma 7.4(v) and (9),

$$\begin{aligned} (A^*-\theta ^*_i I)U_i&\subseteq (A^*-\theta ^*_iI)(U_i + \cdots + U_d) \\&= (A^*-\theta ^*_iI)(E_iV + \cdots + E_dV) \\&\subseteq E_{i-1}V + \cdots + E_dV \\&= U_{i-1} + \cdots + U_d. \end{aligned}$$

The above comments imply (11).

\(\mathrm{(ii)}\Rightarrow \mathrm{(i)} \) From (11) we obtain

$$\begin{aligned} U_0 + \cdots + U_i = E^*_0V+\cdots + E^*_iV \qquad \qquad (0 \le i \le d). \end{aligned}$$
(13)

In particular \(U_0=E^*_0V\). By this and (10) we obtain \(U_i=\tau _i(A)E^*_0V\) for \(0 \le i \le d\). Consequently \(V={{\mathcal {D}}}E^*_0V\), so \(E^*_0\) is normalizing by Lemma 6.3(i),(iii). We show (8). Combining Lemma 7.4(iv) and (13) we obtain Lemma 7.5(iii). This gives Lemma 7.5(i), which is (8). Next we show (9). Let ij be given with \(j-i>1\). Using Lemma 7.4(v) and (11),

$$\begin{aligned} E_iA^*E_jV&\subseteq E_iA^*(E_jV+ \cdots + E_dV) \\&= E_iA^*(U_j+ \cdots + U_d) \\&\subseteq E_i(U_{j-1}+ \cdots + U_d) \\&= E_i(E_{j-1}V+ \cdots + E_dV) \\&=0. \end{aligned}$$

Therefore \(E_i A^*E_j=0\). We have shown (9).

\(\mathrm{(ii)}\Leftrightarrow \mathrm{(iii)} \) Assertion (iii) is a reformulation of (ii) in terms of matrices.

Now assume that (i)–(iii) hold. We mentioned in the proof of \(\mathrm{(ii)}\Rightarrow \mathrm{(i)} \) that \(E^*_0\) is normalizing and \(U_i = \tau _i(A)E^*_0V\) for \(0 \le i \le d\). Let \(\lbrace u_i\rbrace _{i=0}^d\) denote the basis for V from (iii), and define \(\xi = u_0\). From the matrix representing \(A^*\) in (12), we see that \(\xi \) is an eigenvector for \(A^*\) with eigenvalue \(\theta ^*_0\). So \(\xi \in E^*_0V\). From the matrix representing A in (12), we obtain \((A-\theta _i I)u_i =u_{i+1}\) for \(0 \le i \le d-1\). Consequently \(u_i = \tau _i(A)\xi \) for \(0 \le i \le d\). For \(0 \le i \le d\) we have \(u_i = \tau _i(A)\xi \in \tau _i(A)E^*_0V=U_i\). So the basis \(\lbrace u_i \rbrace _{i=0}^d\) induces the decomposition \(\lbrace U_i \rbrace _{i=0}^d\). The sequence \(\lbrace \varphi _i\rbrace _{i=1}^d\) is unique since the vector \(\xi \) is unique up to multiplication by a nonzero scalar. \(\square \)
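The equivalence \(\mathrm{(ii)}\Leftrightarrow \mathrm{(iii)}\) can be checked on a sample instance: build the matrices in (12) from hypothetical data and verify (10), (11) on the standard basis, which induces the decomposition \(\lbrace U_i \rbrace _{i=0}^d\). A minimal sketch in exact rational arithmetic:

```python
from fractions import Fraction as F

d = 3
th  = [F(3), F(1), F(-1), F(-3)]     # hypothetical eigenvalue sequence
ths = [F(0), F(1), F(2), F(4)]       # hypothetical dual eigenvalue sequence
vph = [None, F(1), F(2), F(3)]       # hypothetical varphi_1..varphi_d (nonzero)
n = d + 1

# the matrices in (12)
A  = [[th[i] if i == j else F(1) if i == j + 1 else F(0) for j in range(n)] for i in range(n)]
As = [[ths[i] if i == j else vph[j] if j == i + 1 else F(0) for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

e = [[F(h == i) for h in range(n)] for i in range(n)]   # U_i = span of e[i]

# (10): (A - theta_i I) U_i = U_{i+1} and (A - theta_d I) U_d = 0
for i in range(d):
    assert [x - th[i] * y for x, y in zip(matvec(A, e[i]), e[i])] == e[i + 1]
assert [x - th[d] * y for x, y in zip(matvec(A, e[d]), e[d])] == [F(0)] * n

# (11): (A* - theta*_i I) U_i = varphi_i U_{i-1} and (A* - theta*_0 I) U_0 = 0
assert [x - ths[0] * y for x, y in zip(matvec(As, e[0]), e[0])] == [F(0)] * n
for i in range(1, n):
    lhs = [x - ths[i] * y for x, y in zip(matvec(As, e[i]), e[i])]
    assert lhs == [vph[i] * c for c in e[i - 1]]
```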

Lemma 7.7

Assume that the equivalent conditions (i)–(iii) hold in Proposition 7.6. Then for \(1 \le i \le d\) the following (i)–(iii) are equivalent:

  1. (i)

    \(E_{i-1}A^*E_i \not =0\);

  2. (ii)

    \((A^*-\theta ^*_i I) U_i =U_{i-1}\);

  3. (iii)

    \(\varphi _i \not =0\).

Proof

\(\mathrm{(i)} \Rightarrow \mathrm{(ii)}\) We assume that \((A^*-\theta ^*_i I) U_i \not =U_{i-1}\) and get a contradiction. We have \((A^*-\theta ^*_i I) U_i =0\) since \((A^*-\theta ^*_i I) U_i \subseteq U_{i-1}\) and \(U_{i-1}\) has dimension one. Using Lemma 7.4(v),

$$\begin{aligned} E_{i-1} A^*E_i V&\subseteq E_{i-1}A^*(E_iV+\cdots + E_dV)\\&= E_{i-1}A^*(U_i+\cdots + U_d)\\&\subseteq E_{i-1}(U_i+\cdots + U_d)\\&= E_{i-1}(E_iV+\cdots + E_dV)\\&=0. \end{aligned}$$

Therefore \(E_{i-1}A^*E_i=0\) for a contradiction. We have shown \((A^*-\theta ^*_i I) U_i =U_{i-1}\).

\(\mathrm{(ii)} \Rightarrow \mathrm{(i)}\) We assume that \(E_{i-1} A^*E_i=0\) and get a contradiction. Using Lemma 7.4(v),

$$\begin{aligned} U_{i-1}&= (A^*-\theta ^*_i I ) U_i \\&\subseteq (A^*-\theta ^*_i I ) (U_i+ \cdots + U_d) \\&= (A^*-\theta ^*_i I ) (E_iV+ \cdots + E_dV) \\&\subseteq E_iV+ \cdots + E_dV \\&= U_i+ \cdots + U_d. \end{aligned}$$

This contradicts the fact that \(\lbrace U_i\rbrace _{i=0}^d\) is a decomposition. We have shown \(E_{i-1} A^*E_i\not =0\).

\(\mathrm{(ii)} \Leftrightarrow \mathrm{(iii)}\) Use the matrix representation of \(A^*\) from (12). \(\square \)

8 A Result About Wrap-Around

We continue to discuss the pre Leonard system \(\Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) from Definition 3.1.

Throughout this section we assume that \(\Phi \) satisfies the equivalent conditions (i)–(iii) in Proposition 7.6. We will obtain a useful result involving the scalars \(\lbrace \varphi _i \rbrace _{i=1}^d\) from Proposition 7.6(iii); this result is sometimes called the wrap-around result.

Recall the parameters \(\lbrace a_i\rbrace _{i=0}^d\), \(\lbrace a^*_i\rbrace _{i=0}^d \) from Definition 3.3. We next compute these parameters in terms of \(\lbrace \theta _i\rbrace _{i=0}^d\), \(\lbrace \theta ^*_i\rbrace _{i=0}^d\), \(\lbrace \varphi _i\rbrace _{i=1}^d\). First assume that \(d=0\). Then \(A=\theta _0 I\) and \(A^*=\theta ^*_0 I\), so \(a_0=\theta _0\) and \(a^*_0 = \theta ^*_0\).

Lemma 8.1

(See [18, Lemma 5.1]). For \(d\ge 1\) we have

$$\begin{aligned} a_0&= \theta _0 + \frac{\varphi _1}{\theta ^*_0-\theta ^*_1}, \\ a_i&= \theta _i + \frac{\varphi _i}{\theta ^*_i-\theta ^*_{i-1}} +\frac{\varphi _{i+1}}{\theta ^*_i-\theta ^*_{i+1}} \qquad \qquad (1 \le i \le d-1), \\ a_d&= \theta _d + \frac{\varphi _d}{\theta ^*_d-\theta ^*_{d-1}} \end{aligned}$$

and

$$\begin{aligned} a^*_0&= \theta ^*_0 + \frac{\varphi _1}{\theta _0-\theta _1}, \\ a^*_i&= \theta ^*_{i} + \frac{\varphi _{i}}{\theta _i-\theta _{i-1}} +\frac{\varphi _{i+1}}{\theta _i-\theta _{i+1}} \qquad \qquad (1 \le i \le d-1), \\ a^*_d&= \theta ^*_d + \frac{\varphi _d}{\theta _d-\theta _{d-1}}. \end{aligned}$$

Proof

Concerning \(\lbrace a_i\rbrace _{i=0}^d\), define

$$\begin{aligned} x_0&= \theta _0 + \frac{\varphi _1}{\theta ^*_0-\theta ^*_1}, \\ x_i&= \theta _i + \frac{\varphi _i}{\theta ^*_i-\theta ^*_{i-1}} +\frac{\varphi _{i+1}}{\theta ^*_i-\theta ^*_{i+1}} \qquad \qquad (1 \le i \le d-1), \\ x_d&= \theta _d + \frac{\varphi _d}{\theta ^*_d-\theta ^*_{d-1}}. \end{aligned}$$

We show that \(a_i=x_i\) for \(0 \le i \le d\). Since \(\lbrace \theta ^*_i\rbrace _{i=0}^d\) are mutually distinct, it suffices to show that \(0 = \sum _{i=0}^d (x_i-a_i)\theta ^{*r}_i\) for \(0 \le r \le d\). Let r be given. We compute the trace of \(A A^{*r}\) in two ways. On one hand, \(A^*= \sum _{i=0}^d \theta ^*_i E^*_i\) so \(A^{*r}= \sum _{i=0}^d \theta ^{*r}_i E^*_i\). By this and Definition 3.3,

$$\begin{aligned} \mathrm{tr} (A A^{*r}) = \sum _{i=0}^d a_i \theta ^{*r}_i. \end{aligned}$$
(14)

On the other hand, consider the matrix representations of A and \(A^*\) from (12). Using these matrices we compute the trace of \(A A^{*r}\) as the sum of the diagonal entries. A brief calculation yields

$$\begin{aligned} \mathrm{tr} (A A^{*r}) = \sum _{i=0}^d \theta _i \theta ^{*r}_i + \sum _{i=1}^d \varphi _i \frac{\theta ^{*r}_{i-1} - \theta ^{*r}_i}{ \theta ^*_{i-1}-\theta ^*_i}. \end{aligned}$$
(15)

Comparing (14), (15) we get an equation that becomes \(0 = \sum _{i=0}^d (x_i-a_i)\theta ^{*r}_i\) after rearranging the terms. We have shown that \(a_i=x_i\) for \(0 \le i \le d\). Our assertions concerning \(\lbrace a^*_i\rbrace _{i=0}^d\) are similarly obtained, by computing in two ways the trace of \(A^* A^r\) for \(0 \le r \le d\). \(\square \)
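Lemma 8.1 can be tested numerically: build A, A* as in (12) from hypothetical data, compute \(\mathrm{tr}(AA^{*r})\) by matrix multiplication, and compare with the right-hand sides of (14) and (15), using the formulas of the lemma for \(\lbrace a_i\rbrace _{i=0}^d\). A sketch in exact rational arithmetic:

```python
from fractions import Fraction as F

d = 3
th  = [F(3), F(1), F(-1), F(-3)]     # hypothetical eigenvalue sequence
ths = [F(0), F(1), F(2), F(4)]       # hypothetical dual eigenvalue sequence
vph = [None, F(1), F(2), F(3)]       # hypothetical varphi_1..varphi_d
n = d + 1

A  = [[th[i] if i == j else F(1) if i == j + 1 else F(0) for j in range(n)] for i in range(n)]
As = [[ths[i] if i == j else vph[j] if j == i + 1 else F(0) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(n))

# the formulas for a_0, ..., a_d from Lemma 8.1
a = [th[0] + vph[1] / (ths[0] - ths[1])]
a += [th[i] + vph[i] / (ths[i] - ths[i - 1]) + vph[i + 1] / (ths[i] - ths[i + 1])
      for i in range(1, d)]
a += [th[d] + vph[d] / (ths[d] - ths[d - 1])]

P = [[F(i == j) for j in range(n)] for i in range(n)]   # A*^r, starting at r = 0
for r in range(d + 1):
    t = trace(matmul(A, P))
    # (14): the trace equals sum_i a_i (theta*_i)^r
    assert t == sum(a[i] * ths[i] ** r for i in range(n))
    # (15): the trace from the matrix entries in (12)
    assert t == (sum(th[i] * ths[i] ** r for i in range(n))
                 + sum(vph[i] * (ths[i - 1] ** r - ths[i] ** r) / (ths[i - 1] - ths[i])
                       for i in range(1, n)))
    P = matmul(As, P)
```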

Lemma 8.2

(See [18, Lemma 5.2]). For \(1 \le i \le d\) the scalar \(\varphi _i\) is equal to each of the following four expressions:

$$\begin{aligned}&(\theta ^*_i-\theta ^*_{i-1})\sum _{h=0}^{i-1} (\theta _h-a_h), \qquad \qquad (\theta ^*_{i-1}-\theta ^*_i)\sum _{h=i}^{d} (\theta _h-a_h), \\&(\theta _i-\theta _{i-1})\sum _{h=0}^{i-1} (\theta ^*_h-a^*_h), \qquad \qquad (\theta _{i-1}-\theta _i)\sum _{h=i}^{d} (\theta ^*_h-a^*_h). \end{aligned}$$

Proof

Use Lemmas 3.4, 8.1. \(\square \)

Definition 8.3

Define

$$\begin{aligned} \vartheta _i = \varphi _i - (\theta ^*_i-\theta ^*_0)(\theta _{i-1}-\theta _d) \qquad \qquad (1 \le i \le d) \end{aligned}$$

and \(\vartheta _0=0\), \(\vartheta _{d+1}=0\).

Proposition 8.4

(wrap-around) Assume \(d\ge 2\). Then

$$\begin{aligned} \sum _{i=0}^{d-2} E_d A^* E_i E^*_0 (\theta _i - \theta _{d-1}) = E_d E^*_0 (\vartheta _1 -\vartheta _d). \end{aligned}$$

Proof

In the equation \(I = \sum _{i=0}^d E^*_i\), multiply each term on the right by \(AE^*_0\). Simplify the result using \(E^*_0AE^*_0= a_0 E^*_0\) and \(E^*_i A E^*_0 = 0 \) \((2 \le i \le d)\) to obtain

$$\begin{aligned} A E^*_0 = a_0 E^*_0 + E^*_1 A E^*_0. \end{aligned}$$
(16)

In (16), multiply each term on the left by \(A^*\) and simplify to get

$$\begin{aligned} A^*A E^*_0 = a_0 \theta ^*_0 E^*_0 + \theta ^*_1 E^*_1 A E^*_0. \end{aligned}$$
(17)

In the equation \(I = \sum _{i=0}^d E_i\), multiply each term on the left by \(E_d A^*\). Simplify the result using \(E_d A^* E_d = a^*_d E_d\) to obtain

$$\begin{aligned} E_d A^* = a^*_d E_d + \sum _{i=0}^{d-1} E_d A^* E_i. \end{aligned}$$
(18)

In (18), multiply each term on the right by A and simplify to obtain

$$\begin{aligned} E_d A^*A = a^*_d \theta _d E_d + \sum _{i=0}^{d-1} \theta _i E_d A^* E_i. \end{aligned}$$
(19)

We now compute \(\theta ^*_1 E_d\) times (16) minus \(E_d\) times (17) minus (18) times \(\theta _{d-1} E^*_0\) plus (19) times \( E^*_0\). The result is

$$\begin{aligned}&E_d E^*_0 \biggl ( (\theta ^*_0 - \theta ^*_1) a_0+ (\theta _{d-1} - \theta _d) a^*_d + \theta _d \theta ^*_1 - \theta _{d-1} \theta ^*_0 \biggr ) = \sum _{i=0}^{d-2} E_d A^* E_i E^*_0 (\theta _i - \theta _{d-1}). \end{aligned}$$

In the above equation, consider the coefficient of \(E_d E^*_0\). Evaluate this coefficient using

$$\begin{aligned}&a_0 = \theta _0 + \frac{\varphi _1}{\theta ^*_0-\theta ^*_1}, \qquad \qquad \varphi _1 = \vartheta _1 + (\theta ^*_1 - \theta ^*_0)(\theta _0 - \theta _d), \\&a^*_d = \theta ^*_d + \frac{\varphi _d}{\theta _d-\theta _{d-1}}, \qquad \qquad \varphi _d = \vartheta _d + (\theta ^*_d - \theta ^*_0)(\theta _{d-1} - \theta _d) \end{aligned}$$

to find that this coefficient is \(\vartheta _1 -\vartheta _d\). \(\square \)
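Proposition 8.4 can also be tested numerically: build A, A* as in (12) from hypothetical data (so that the conditions of Proposition 7.6 hold), form the primitive idempotents as Lagrange projectors, and compare the two sides of the wrap-around identity. A sketch in exact rational arithmetic:

```python
from fractions import Fraction as F

d = 3
th  = [F(3), F(1), F(-1), F(-3)]     # hypothetical eigenvalue sequence
ths = [F(0), F(1), F(2), F(4)]       # hypothetical dual eigenvalue sequence
vph = [None, F(1), F(2), F(7)]       # hypothetical varphi_1..varphi_d (nonzero)
n = d + 1

A  = [[th[i] if i == j else F(1) if i == j + 1 else F(0) for j in range(n)] for i in range(n)]
As = [[ths[i] if i == j else vph[j] if j == i + 1 else F(0) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def smul(c, X):
    return [[c * x for x in row] for row in X]

def madd(X, Y):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]

Id = [[F(i == j) for j in range(n)] for i in range(n)]

def idem(M, eig, i):
    # Lagrange projector onto the eig[i]-eigenspace of M
    P = Id
    for j in range(n):
        if j != i:
            P = matmul(P, smul(F(1) / (eig[i] - eig[j]), madd(M, smul(-eig[j], Id))))
    return P

E  = [idem(A, th, i) for i in range(n)]     # primitive idempotents of A
Es = [idem(As, ths, i) for i in range(n)]   # primitive idempotents of A*

# vartheta_i from Definition 8.3
vth = [F(0)] + [vph[i] - (ths[i] - ths[0]) * (th[i - 1] - th[d]) for i in range(1, n)]

lhs = [[F(0)] * n for _ in range(n)]
for i in range(d - 1):
    T = matmul(matmul(E[d], As), matmul(E[i], Es[0]))
    lhs = madd(lhs, smul(th[i] - th[d - 1], T))
rhs = smul(vth[1] - vth[d], matmul(E[d], Es[0]))
assert lhs == rhs
```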

For the sake of completeness, we mention a second version of Proposition 8.4. We do not use this second version, so we will not dwell on the proof.

Lemma 8.5

Assume \(d\ge 2\). Then

$$\begin{aligned} \sum _{i=2}^{d} E^*_0 A E^*_i E_d (\theta ^*_1 - \theta ^*_i) = E^*_0 E_d (\vartheta _1 -\vartheta _d). \end{aligned}$$

Proof

Similar to the proof of Proposition 8.4. \(\square \)

9 The Parameter Array of a Leonard System

In this section we consider a Leonard system \(\Phi = (A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) on V. Note that \(\Phi \) satisfies the equivalent conditions (i)–(iii) in Proposition 7.6.

Definition 9.1

By the first split sequence for \(\Phi \) we mean the sequence \(\lbrace \varphi _i\rbrace _{i=1}^d\) from Proposition 7.6(iii). Let \(\lbrace \phi _i \rbrace _{i=1}^d\) denote the first split sequence for \(\Phi ^\Downarrow \). We call \(\lbrace \phi _i \rbrace _{i=1}^d\) the second split sequence for \(\Phi \).

By Lemma 7.7 and the construction, \(\varphi _i\) and \(\phi _i\) are nonzero for \(1 \le i \le d\).

Lemma 9.2

There exists a basis for V with respect to which

$$\begin{aligned} A: \quad \left( \begin{array}{cccccc} \theta _d &{} &{} &{} &{} &{} \mathbf{0} \\ 1 &{} \theta _{d-1} &{} &{} &{} &{} \\ &{} 1 &{} \theta _{d-2} &{} &{} &{} \\ &{}&{} \cdot &{} \cdot &{}&{} \\ &{} &{} &{} \cdot &{} \cdot &{} \\ \mathbf{0} &{} &{} &{} &{} 1 &{} \theta _0 \end{array} \right) , \qquad \qquad A^*:\quad \left( \begin{array}{cccccc} \theta ^*_0 &{} \phi _1 &{} &{} &{} &{} \mathbf{0} \\ &{} \theta ^*_1 &{} \phi _2 &{} &{} &{} \\ &{} &{} \theta ^*_2 &{} \cdot &{} &{} \\ &{}&{} &{} \cdot &{} \cdot &{} \\ &{} &{} &{} &{} \cdot &{} \phi _d \\ \mathbf{0} &{} &{} &{} &{} &{} \theta ^*_d \end{array} \right) . \end{aligned}$$
(20)

Proof

Apply Proposition 7.6(iii) to \(\Phi ^\Downarrow \). \(\square \)

Lemma 9.3

For a Leonard system \(\Phi '\) over \({\mathbb {F}}\), the following are equivalent:

  1. (i)

    \(\Phi , \Phi '\) are isomorphic;

  2. (ii)

    \(\Phi , \Phi '\) have the same eigenvalue sequence, dual eigenvalue sequence, and first split sequence;

  3. (iii)

    \(\Phi , \Phi '\) have the same eigenvalue sequence, dual eigenvalue sequence, and second split sequence.

Proof

\(\mathrm{(i)} \Leftrightarrow \mathrm{(ii)}\) By Proposition 7.6(iii).

\(\mathrm{(i)} \Leftrightarrow \mathrm{(iii)}\) By Lemma 9.2. \(\square \)

In Lemma 8.1 we gave some formulas for \(\lbrace a_i\rbrace _{i=0}^d\), \(\lbrace a^*_i\rbrace _{i=0}^d\) that involved \(\lbrace \varphi _i\rbrace _{i=1}^d\). Next we give some similar formulas that involve \(\lbrace \phi _i\rbrace _{i=1}^d\).

Lemma 9.4

For \(d\ge 1\) we have

$$\begin{aligned} a_0&= \theta _d + \frac{\phi _1}{\theta ^*_0-\theta ^*_1}, \\ a_i&= \theta _{d-i} + \frac{\phi _i}{\theta ^*_i-\theta ^*_{i-1}} +\frac{\phi _{i+1}}{\theta ^*_i-\theta ^*_{i+1}} \qquad \qquad (1 \le i \le d-1), \\ a_d&= \theta _0 + \frac{\phi _d}{\theta ^*_d-\theta ^*_{d-1}} \end{aligned}$$

and

$$\begin{aligned} a^*_0&= \theta ^*_d + \frac{\phi _d}{\theta _0-\theta _{1}}, \\ a^*_{i}&= \theta ^*_{d-i} + \frac{\phi _{d-i+1}}{\theta _{i}-\theta _{i-1}} + \frac{\phi _{d-i}}{\theta _{i}-\theta _{i+1}} \qquad \qquad (1 \le i \le d-1), \\ a^*_d&= \theta ^*_0 + \frac{\phi _1}{\theta _d-\theta _{d-1}}. \end{aligned}$$

Proof

Recall that \(\lbrace \phi _i \rbrace _{i=1}^d\) is the first split sequence for \(\Phi ^\Downarrow \). Apply Lemma 8.1 to \(\Phi ^\Downarrow \) and use the data for \(\Phi ^\Downarrow \) in Proposition 3.6. \(\square \)

Lemma 9.5

(See [18, Lemma 6.4]). For \(1 \le i \le d\) the scalar \(\phi _i\) is equal to each of the following four expressions:

$$\begin{aligned}&(\theta ^*_i-\theta ^*_{i-1})\sum _{h=0}^{i-1} (\theta _{d-h}-a_h), \qquad \qquad (\theta ^*_{i-1}-\theta ^*_i)\sum _{h=i}^{d} (\theta _{d-h}-a_h), \\&(\theta _{d-i}-\theta _{d-i+1})\sum _{h=0}^{i-1} (\theta ^*_h-a^*_{d-h}), \qquad \qquad (\theta _{d-i+1}-\theta _{d-i})\sum _{h=i}^{d} (\theta ^*_h-a^*_{d-h}). \end{aligned}$$

Proof

Apply Lemma 8.2 to \(\Phi ^\Downarrow \) and use the data for \(\Phi ^\Downarrow \) in Proposition 3.6. \(\square \)

Definition 9.6

(See [19, Definition 10.1]). By the parameter array of \(\Phi \) we mean the sequence

$$\begin{aligned} \bigl ( \lbrace \theta _i \rbrace _{i=0}^d; \lbrace \theta ^*_i \rbrace _{i=0}^d; \lbrace \varphi _i \rbrace _{i=1}^d; \lbrace \phi _i \rbrace _{i=1}^d \bigr ) \end{aligned}$$

where we recall that \(\lbrace \theta _i \rbrace _{i=0}^d\) is the eigenvalue sequence of \(\Phi \), \(\lbrace \theta ^*_i \rbrace _{i=0}^d\) is the dual eigenvalue sequence of \(\Phi \), \(\lbrace \varphi _i \rbrace _{i=1}^d\) is the first split sequence of \(\Phi \), and \(\lbrace \phi _i \rbrace _{i=1}^d\) is the second split sequence of \(\Phi \).

Lemma 9.7

(See [18, Theorem 1.11]). The parameter arrays of

$$\begin{aligned} \Phi , \qquad \Phi ^\Downarrow , \qquad \Phi ^\downarrow , \qquad \Phi ^* \end{aligned}$$

are related as follows.

\(\Phi \): \(\bigl ( \lbrace \theta _i \rbrace _{i=0}^d; \lbrace \theta ^*_i \rbrace _{i=0}^d; \lbrace \varphi _i \rbrace _{i=1}^d; \lbrace \phi _i \rbrace _{i=1}^d \bigr ) \)

\(\Phi ^\Downarrow \): \( \bigl ( \lbrace \theta _{d-i} \rbrace _{i=0}^d; \lbrace \theta ^*_i \rbrace _{i=0}^d; \lbrace \phi _i \rbrace _{i=1}^d; \lbrace \varphi _i \rbrace _{i=1}^d \bigr ) \)

\(\Phi ^\downarrow \): \( \bigl ( \lbrace \theta _{i} \rbrace _{i=0}^d; \lbrace \theta ^*_{d-i} \rbrace _{i=0}^d; \lbrace \phi _{d-i+1} \rbrace _{i=1}^d; \lbrace \varphi _{d-i+1} \rbrace _{i=1}^d \bigr )\)

\(\Phi ^*\): \( \bigl ( \lbrace \theta ^*_{i} \rbrace _{i=0}^d; \lbrace \theta _{i} \rbrace _{i=0}^d; \lbrace \varphi _{i} \rbrace _{i=1}^d; \lbrace \phi _{d-i+1} \rbrace _{i=1}^d \bigr ) \)

Proof

Use Proposition 3.6 and Lemmas 8.2, 9.5. \(\square \)

We mention a variation on Lemma 9.3.

Proposition 9.8

Two Leonard systems over \({\mathbb {F}}\) are isomorphic if and only if they have the same parameter array.

Proof

By Lemma 9.3 and Definition 9.6. \(\square \)

10 Statement of the Leonard System Classification

In the following theorem we classify up to isomorphism the Leonard systems over \({\mathbb {F}}\).

Theorem 10.1

(See [18, Theorem 1.9]). Consider a sequence

$$\begin{aligned} \bigl ( \lbrace \theta _i \rbrace _{i=0}^d; \lbrace \theta ^*_i \rbrace _{i=0}^d; \lbrace \varphi _i \rbrace _{i=1}^d; \lbrace \phi _i \rbrace _{i=1}^d \bigr ) \end{aligned}$$
(21)

of scalars in \({\mathbb {F}}\). Then there exists a Leonard system \(\Phi \) over \({\mathbb {F}}\) with parameter array (21) if and only if the following conditions (PA1)–(PA5) hold:

(PA1):

\( \theta _i\not =\theta _j,\quad \theta ^*_i\not =\theta ^*_j \quad \) if \(\;\;i\not =j,\qquad \qquad (0 \le i,j\le d)\);

(PA2):

\( \varphi _i \not =0, \quad \phi _i\not =0 \qquad \qquad (1 \le i \le d)\);

(PA3):

\( {\displaystyle { \varphi _i = \phi _1 \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0-\theta _d} +(\theta ^*_i-\theta ^*_0)(\theta _{i-1}-\theta _d) \qquad \;\;(1 \le i \le d)}}\);

(PA4):

\( {\displaystyle { \phi _i = \varphi _1 \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0-\theta _d} +(\theta ^*_i-\theta ^*_0)(\theta _{d-i+1}-\theta _0) \qquad (1 \le i \le d)}}\);

(PA5):

the scalars

$$\begin{aligned} \frac{\theta _{i-2}-\theta _{i+1}}{\theta _{i-1}-\theta _i},\qquad \qquad \frac{\theta ^*_{i-2}-\theta ^*_{i+1}}{\theta ^*_{i-1}-\theta ^*_i} \qquad \qquad \end{aligned}$$
(22)

are equal and independent of i for \(2\le i \le d-1\).

Moreover, if \(\Phi \) exists then \(\Phi \) is unique up to isomorphism of Leonard systems.

The proof of Theorem 10.1 will be completed in Sect. 17.
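To illustrate the conditions (PA1)–(PA5), one can verify them directly for a candidate sequence. The following sketch uses a Krawtchouk-type array (a standard example, stated here without derivation; the specific values are offered as an illustration), in exact rational arithmetic:

```python
from fractions import Fraction as F

d = 4
th  = [F(d - 2 * i) for i in range(d + 1)]                          # theta_i
ths = [F(d - 2 * i) for i in range(d + 1)]                          # theta*_i
vph = [None] + [F(-2 * i * (d - i + 1)) for i in range(1, d + 1)]   # varphi_i
phi = [None] + [F( 2 * i * (d - i + 1)) for i in range(1, d + 1)]   # phi_i

# (PA1): the theta_i are mutually distinct, and likewise the theta*_i
assert all(th[i] != th[j] and ths[i] != ths[j]
           for i in range(d + 1) for j in range(i))
# (PA2): the varphi_i and phi_i are nonzero
assert all(vph[i] != 0 and phi[i] != 0 for i in range(1, d + 1))
# (PA3), (PA4)
for i in range(1, d + 1):
    s = sum((th[h] - th[d - h]) / (th[0] - th[d]) for h in range(i))
    assert vph[i] == phi[1] * s + (ths[i] - ths[0]) * (th[i - 1] - th[d])
    assert phi[i] == vph[1] * s + (ths[i] - ths[0]) * (th[d - i + 1] - th[0])
# (PA5): the two ratios in (22) are equal and independent of i
vals = [((th[i - 2] - th[i + 1]) / (th[i - 1] - th[i]),
         (ths[i - 2] - ths[i + 1]) / (ths[i - 1] - ths[i])) for i in range(2, d)]
assert all(x == y == vals[0][0] for x, y in vals)
```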

Definition 10.2

By a parameter array of diameter d over \({\mathbb {F}}\), we mean a sequence (21) of scalars in \({\mathbb {F}}\) that satisfy (PA1)–(PA5).

Theorem 10.1 gives a bijection between the following two sets:

  1. (i)

    the parameter arrays over \({\mathbb {F}}\) that have diameter d;

  2. (ii)

    the isomorphism classes of Leonard systems over \({\mathbb {F}}\) that have diameter d.

We have a comment.

Lemma 10.3

For \(d\ge 1\), a parameter array \(\bigl ( \lbrace \theta _i \rbrace _{i=0}^d; \lbrace \theta ^*_i \rbrace _{i=0}^d; \lbrace \varphi _i \rbrace _{i=1}^d; \lbrace \phi _i \rbrace _{i=1}^d \bigr )\) is uniquely determined by \(\varphi _1\), \(\lbrace \theta _i \rbrace _{i=0}^d\), \(\lbrace \theta ^*_i \rbrace _{i=0}^d\).

Proof

By the nature of the equations (PA3), (PA4): given \(\varphi _1\), \(\lbrace \theta _i \rbrace _{i=0}^d\), \(\lbrace \theta ^*_i \rbrace _{i=0}^d\), equation (PA4) determines \(\lbrace \phi _i \rbrace _{i=1}^d\); in particular \(\phi _1\) is determined, and then (PA3) determines \(\lbrace \varphi _i \rbrace _{i=1}^d\). \(\square \)

11 Recurrent Sequences

Throughout this section let \(\lbrace \theta _i\rbrace _{i=0}^d\) denote scalars in \({\mathbb {F}}\).

Definition 11.1

(See [18, Definition 8.2]). Let \(\beta , \gamma , \varrho \) denote scalars in \({\mathbb {F}}\).

  1. (i)

    The sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is said to be recurrent whenever \(\theta _{i-1}\not =\theta _i\) for \(2 \le i \le d-1\), and

    $$\begin{aligned} \frac{\theta _{i-2}-\theta _{i+1}}{\theta _{i-1}-\theta _i} \end{aligned}$$
    (23)

    is independent of i for \(2 \le i \le d-1\).

  2. (ii)

    The sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is said to be \(\beta \)-recurrent whenever

    $$\begin{aligned} \theta _{i-2}\,-\,(\beta +1)\theta _{i-1}\,+\,(\beta +1)\theta _i \,-\,\theta _{i+1} \end{aligned}$$
    (24)

    is zero for \(2 \le i \le d-1\).

  3. (iii)

    The sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is said to be \((\beta ,\gamma )\)-recurrent whenever

    $$\begin{aligned} \theta _{i-1}\,-\,\beta \theta _i\,+\,\theta _{i+1}=\gamma \end{aligned}$$
    (25)

    for \(1 \le i \le d-1\).

  4. (iv)

    The sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is said to be \((\beta ,\gamma ,\varrho )\)-recurrent whenever

    $$\begin{aligned} \theta ^2_{i-1}-\beta \theta _{i-1}\theta _i+\theta ^2_i -\gamma (\theta _{i-1} +\theta _i)=\varrho \end{aligned}$$
    (26)

    for \(1 \le i \le d\).
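Definition 11.1 can be illustrated with the geometric sequence \(\theta _i = q^i\), which is \(\beta \)-recurrent with \(\beta = q + q^{-1}\), and also \((\beta ,0)\)-recurrent and \((\beta ,0,0)\)-recurrent. A quick check with a hypothetical value of q, in exact rational arithmetic:

```python
from fractions import Fraction as F

q = F(2)          # hypothetical sample value
d = 6
th = [q ** i for i in range(d + 1)]
beta, gamma, rho = q + 1 / q, F(0), F(0)

# beta-recurrent: (24) vanishes for 2 <= i <= d-1
assert all(th[i - 2] - (beta + 1) * th[i - 1] + (beta + 1) * th[i] - th[i + 1] == 0
           for i in range(2, d))
# (beta, gamma)-recurrent: (25) holds for 1 <= i <= d-1
assert all(th[i - 1] - beta * th[i] + th[i + 1] == gamma for i in range(1, d))
# (beta, gamma, rho)-recurrent: (26) holds for 1 <= i <= d
assert all(th[i - 1] ** 2 - beta * th[i - 1] * th[i] + th[i] ** 2
           - gamma * (th[i - 1] + th[i]) == rho for i in range(1, d + 1))
```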

Lemma 11.2

The following are equivalent:

  1. (i)

    the sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is recurrent;

  2. (ii)

    the scalars \(\theta _{i-1}\not =\theta _i\) for \(2 \le i \le d-1\), and there exists \(\beta \in {\mathbb {F}}\) such that \(\lbrace \theta _i\rbrace _{i=0}^d\) is \(\beta \)-recurrent.

Suppose (i), (ii) hold, and that \(d\ge 3\). Then the common value of (23) is equal to \(\beta +1\).

Proof

Routine. \(\square \)

Lemma 11.3

For \(\beta \in {\mathbb {F}}\) the following are equivalent:

  1. (i)

    the sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is \(\beta \)-recurrent;

  2. (ii)

    there exists \(\gamma \in {\mathbb {F}}\) such that \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta ,\gamma )\)-recurrent.

Proof

\(\mathrm{(i)}\Rightarrow \mathrm{(ii)} \) For \(2\le i \le d-1\), the expression (24) is zero by assumption, so

$$\begin{aligned} \theta _{i-2}-\beta \theta _{i-1}+\theta _i = \theta _{i-1}-\beta \theta _i+\theta _{i+1}. \end{aligned}$$

The left-hand side of (25) is independent of i, and the result follows.

\(\mathrm{(ii)}\Rightarrow \mathrm{(i)} \) For \(2\le i \le d-1\), subtract the equation (25) at i from the corresponding equation obtained by replacing i by \(i-1\), to find (24) is zero. \(\square \)

Lemma 11.4

The following (i), (ii) hold for all \(\beta , \gamma \in {\mathbb {F}}\).

  1. (i)

    Suppose \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta ,\gamma )\)-recurrent. Then there exists \(\varrho \in {\mathbb {F}}\) such that \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta ,\gamma ,\varrho )\)-recurrent.

  2. (ii)

    Suppose \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta ,\gamma ,\varrho )\)-recurrent, and that \(\theta _{i-1}\not =\theta _{i+1}\) for \(1 \le i\le d-1\). Then \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta ,\gamma )\)-recurrent.

Proof

Let \(p_i\) denote the expression on the left in (26), and observe

$$\begin{aligned} p_i-p_{i+1}&= (\theta _{i-1}-\theta _{i+1})(\theta _{i-1}-\beta \theta _i +\theta _{i+1} - \gamma ) \end{aligned}$$

for \(1 \le i \le d-1\). Assertions (i), (ii) are both routine consequences of this. \(\square \)
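The factorization of \(p_i-p_{i+1}\) used in the proof of Lemma 11.4 is a polynomial identity in \(\theta _{i-1}, \theta _i, \theta _{i+1}, \beta , \gamma \). As an informal check, the sketch below compares both sides on a grid of rational sample points; since both sides are polynomials, agreement on the grid is strong evidence of the identity.

```python
from fractions import Fraction
from itertools import product

# Spot check of the identity in the proof of Lemma 11.4:
#   p_i - p_{i+1} = (a - c)*(a - beta*b + c - gamma),
# where a, b, c stand for theta_{i-1}, theta_i, theta_{i+1} and
# p(x, y) is the left side of (26).
def p(x, y, beta, gamma):
    return x*x - beta*x*y + y*y - gamma*(x + y)

vals = [Fraction(v) for v in (-2, -1, 0, 1, 2, 3)]
ok = all(
    p(a, b, beta, gamma) - p(b, c, beta, gamma)
    == (a - c) * (a - beta*b + c - gamma)
    for a, b, c, beta, gamma in product(vals, repeat=5)
)
```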

12 Recurrent Sequences in Closed Form

In this section, we obtain some formulas involving recurrent sequences. Let \(\overline{{\mathbb {F}}}\) denote the algebraic closure of \({\mathbb {F}}\). For \(q \in \overline{{\mathbb {F}}}\) let \({\mathbb {F}}[q ]\) denote the field extension of \({\mathbb {F}}\) generated by q.

Throughout this section let \(\beta \) and \(\lbrace \theta _i\rbrace _{i=0}^d\) denote scalars in \({\mathbb {F}}\).

Lemma 12.1

Assume that \(\lbrace \theta _i\rbrace _{i=0}^d\) is \(\beta \)-recurrent. Then the following (i)–(iii) hold.

  1. (i)

    Suppose \(\beta \not =2\), \(\beta \not =-2\), and pick \(0 \not =q \in \overline{{\mathbb {F}}}\) such that \(q+q^{-1}=\beta \). Then there exist scalars \(\alpha _1, \alpha _2, \alpha _3\) in \({\mathbb {F}}[q ]\) such that

    $$\begin{aligned} \theta _i = \alpha _1 + \alpha _2 q^i + \alpha _3 q^{-i} \qquad \qquad (0 \le i \le d). \end{aligned}$$
    (27)
  2. (ii)

    Suppose \(\beta = 2\). Then there exist \(\alpha _1, \alpha _2, \alpha _3 \) in \({\mathbb {F}} \) such that

    $$\begin{aligned} \theta _i = \alpha _1 + \alpha _2 i + \alpha _3 i(i-1)/2 \qquad \qquad (0 \le i \le d). \end{aligned}$$
    (28)
  3. (iii)

    Suppose \(\beta = -2\) and \(\mathrm{char}({\mathbb {F}}) \not =2\). Then there exist \(\alpha _1, \alpha _2, \alpha _3 \) in \({\mathbb {F}} \) such that

    $$\begin{aligned} \theta _i = \alpha _1 + \alpha _2 (-1)^i + \alpha _3 i(-1)^i \qquad \qquad (0 \le i \le d). \end{aligned}$$
    (29)

Referring to case (ii) above, if \(\mathrm{char}({\mathbb {F}}) =2\) then we interpret the expression \(i(i-1)/2\) as 0 if \(i\equiv 0\) or \(i \equiv 1 \pmod 4\), and as 1 if \(i\equiv 2\) or \(i\equiv 3 \pmod 4\).

Proof

  1. (i)

    We assume \(d\ge 2\); otherwise the result is trivial. Let q be given, and consider the equations (27) for \(i=0,1,2\). These equations are linear in \(\alpha _1, \alpha _2, \alpha _3\). We routinely find the coefficient matrix is nonsingular, so there exist \(\alpha _1, \alpha _2, \alpha _3\) in \({\mathbb {F}} [q ]\) such that (27) holds for \(i=0,1,2\). Using these scalars, let \(\varepsilon _i \) denote the left-hand side of (27) minus the right-hand side of (27), for \(0 \le i \le d\). On one hand \(\varepsilon _0\), \(\varepsilon _1\), \(\varepsilon _2\) are zero from the construction. On the other hand, one readily checks

    $$\begin{aligned} \varepsilon _{i-2}\,-\,(\beta +1)\varepsilon _{i-1}\,+\,(\beta +1)\varepsilon _i \,-\,\varepsilon _{i+1}=0 \end{aligned}$$

    for \(2 \le i \le d-1\). By these comments \(\varepsilon _i=0\) for \(0 \le i \le d\), and the result follows.

  2. (ii), (iii)

    Similar to the proof of (i) above.

\(\square \)
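As an informal check of Lemma 12.1(i) (not part of the formal development), any sequence of the form (27) is \(\beta \)-recurrent when \(q+q^{-1}=\beta \), since the characteristic polynomial of the recurrence (24) factors as \((\lambda -1)(\lambda -q)(\lambda -q^{-1})\). The sketch below verifies this with exact rational arithmetic; the values of \(q, \alpha _1, \alpha _2, \alpha _3, d\) are arbitrary choices.

```python
from fractions import Fraction

# Spot check of Lemma 12.1(i): with beta = q + 1/q, the sequence
# theta_i = a1 + a2*q^i + a3*q^(-i) makes (24) vanish.
q = Fraction(5, 2)
a1, a2, a3 = Fraction(7), Fraction(-3), Fraction(1, 4)
beta = q + 1/q
d = 8
theta = [a1 + a2*q**i + a3*q**(-i) for i in range(d + 1)]

ok24 = all(
    theta[i-2] - (beta + 1)*theta[i-1] + (beta + 1)*theta[i] - theta[i+1] == 0
    for i in range(2, d)
)
```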

Lemma 12.2

Assume that \(\lbrace \theta _i\rbrace _{i=0}^d\) are mutually distinct and \(\beta \)-recurrent. Then (i)–(iv) hold below.

  1. (i)

    Suppose \(\beta \not =2\), \(\beta \not =-2\), and pick \(0 \not = q \in \overline{{\mathbb {F}}}\) such that \(q+q^{-1}=\beta \). Then \(q^i \not =1 \) for \(1 \le i \le d\).

  2. (ii)

    Suppose \(\beta = 2\) and \(\mathrm{char}({\mathbb {F}})=p\), \(p\ge 3\). Then \(d<p\).

  3. (iii)

    Suppose \(\beta = -2\) and \(\mathrm{char}({\mathbb {F}}) =p\), \(p\ge 3\). Then \(d<2p\).

  4. (iv)

    Suppose \(\beta = 0\) and \(\mathrm{char}({\mathbb {F}}) =2\). Then \(d\le 3\).

Proof

  1. (i)

Using (27), we find that \(q^i=1 \) implies \(\theta _i=\theta _0\), contradicting the assumption that \(\lbrace \theta _i\rbrace _{i=0}^d\) are mutually distinct. Hence \(q^i \not =1 \) for \(1 \le i \le d\).

  2. (ii)

    Suppose \(d\ge p\). Setting \(i=p\) in (28) and recalling that p is congruent to 0 modulo p, we obtain \(\theta _p=\theta _0\), a contradiction. Hence \(d<p\).

  3. (iii)

    Suppose \(d\ge 2p\). Setting \(i=2p\) in (29) and recalling that p is congruent to 0 modulo p, we obtain \(\theta _{2p}=\theta _0\), a contradiction. Hence \(d<2p\).

  4. (iv)

    Suppose \(d\ge 4\). Setting \(i=4\) in (28), we find \(\theta _4=\theta _0\) in view of the comment at the end of Lemma 12.1. This is a contradiction, so \(d\le 3\).

\(\square \)

Lemma 12.3

(See [18, Lemma 9.4]). Assume that \(\lbrace \theta _i\rbrace _{i=0}^d\) are mutually distinct and \(\beta \)-recurrent. Pick any integers \(i,j,r,s\) \((0 \le i,j,r,s \le d)\) such that \(i+j=r+s\), \(r\not =s\). Then (i)–(iv) hold below.

  1. (i)

    Suppose \(\beta \not =2\), \(\beta \not =-2\). Then

    $$\begin{aligned} \frac{\theta _i-\theta _{j}}{\theta _r-\theta _s} = \frac{q^{i}-q^j}{q^r-q^s}, \end{aligned}$$
    (30)

    where \(q+q^{-1}=\beta \).

  2. (ii)

    Suppose \(\beta = 2\) and \(\mathrm{char}({\mathbb {F}}) \not =2\). Then

    $$\begin{aligned} \frac{\theta _i-\theta _{j}}{\theta _r-\theta _s} = \frac{i-j}{r-s}. \end{aligned}$$
    (31)
  3. (iii)

    Suppose \(\beta = -2\) and \(\mathrm{char}({\mathbb {F}})\not =2\). Then

    $$\begin{aligned} \frac{\theta _i-\theta _{j}}{\theta _r-\theta _s} = \left\{ \begin{array}{ll} (-1)^{i+r} \frac{i-j}{r-s}, &{} \text {if }\;i+j\;\text { is even}; \\ (-1)^{i+r}, &{} \text {if }\;i+j\;\text { is odd.} \end{array} \right. \end{aligned}$$
    (32)
  4. (iv)

    Suppose \(\beta = 0\) and \(\mathrm{char}({\mathbb {F}}) =2\). Then

    $$\begin{aligned} \frac{\theta _i-\theta _{j}}{\theta _r-\theta _s} = \left\{ \begin{array}{ll} 0, &{} \text {if }\;i=j; \\ 1, &{} \text {if }\;i\not =j. \end{array} \right. \end{aligned}$$
    (33)

Proof

To get (i), evaluate the left-hand side of (30) using (27), and simplify the result. The cases (ii)–(iv) are similar. \(\square \)
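As an informal check of Lemma 12.3(i), note from (27) that \(\theta _i - \theta _j = (q^i-q^j)(\alpha _2 - \alpha _3 q^{-i-j})\), so the factor involving \(\alpha _2, \alpha _3\) cancels in the ratio whenever \(i+j=r+s\). The sketch below verifies this with exact rational arithmetic; the parameter values are arbitrary choices.

```python
from fractions import Fraction

# Spot check of Lemma 12.3(i): for theta_i in the closed form (27),
# (theta_i - theta_j)/(theta_r - theta_s) = (q^i - q^j)/(q^r - q^s)
# whenever i + j = r + s and r != s.
q = Fraction(3, 2)
a1, a2, a3 = Fraction(1), Fraction(2), Fraction(-5)
d = 7
theta = [a1 + a2*q**i + a3*q**(-i) for i in range(d + 1)]

ok = True
for i in range(d + 1):
    for j in range(d + 1):
        for r in range(d + 1):
            s = i + j - r
            if 0 <= s <= d and r != s:
                lhs = (theta[i] - theta[j]) / (theta[r] - theta[s])
                rhs = (q**i - q**j) / (q**r - q**s)
                ok = ok and (lhs == rhs)
```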

13 A Sum

Throughout this section assume \(d\ge 1\). Let \(\beta \) and \(\lbrace \theta _i\rbrace _{i=0}^d\) denote scalars in \({\mathbb {F}}\) with \(\lbrace \theta _i\rbrace _{i=0}^d\) mutually distinct.

We consider the sums

$$\begin{aligned} \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0-\theta _d}, \end{aligned}$$
(34)

where \(0 \le i \le d+1\). Denoting the sum in (34) by \(\vartheta _i\), we have

$$\begin{aligned} \vartheta _0=0,\qquad \vartheta _1=1,\qquad \vartheta _d=1,\qquad \vartheta _{d+1}=0. \end{aligned}$$
(35)

Moreover

$$\begin{aligned} \vartheta _i= \vartheta _{d-i+1} \qquad \qquad (0 \le i \le d+1). \end{aligned}$$
(36)

The sums (34) play an important role a bit later, so we will examine them carefully. We begin by giving explicit formulas for the sums (34) under the assumption that \(\lbrace \theta _i\rbrace _{i=0}^d\) is \(\beta \)-recurrent. To avoid trivialities we assume that \(d\ge 3\).
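We remark that (35) and (36) hold for any mutually distinct \(\lbrace \theta _i\rbrace _{i=0}^d\), by telescoping; no recurrence is needed. The sketch below checks this on an arbitrarily chosen sequence with exact rational arithmetic.

```python
from fractions import Fraction

# Spot check of (35) and (36) for an arbitrary mutually distinct
# sequence theta_0, ..., theta_d (here d = 5).
theta = [Fraction(v) for v in (4, -1, 7, 0, 2, 9)]
d = len(theta) - 1

def vt(i):
    # the sum (34)
    return sum(theta[h] - theta[d-h] for h in range(i)) / (theta[0] - theta[d])

ok35 = (vt(0), vt(1), vt(d), vt(d + 1)) == (0, 1, 1, 0)
ok36 = all(vt(i) == vt(d - i + 1) for i in range(d + 2))
```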

Lemma 13.1

(See [18, Lemma 10.2]). Assume that \(\lbrace \theta _i\rbrace _{i=0}^d\) are mutually distinct and \(\beta \)-recurrent. Further assume that \(d\ge 3\). Then for \(0 \le i \le d+1\) we have the following.

  1. (i)

    Suppose \(\beta \not =2\), \(\beta \not =-2\). Then

    $$\begin{aligned} \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0-\theta _d} = \frac{q^i-1}{q-1}\,\frac{q^{d-i+1}-1}{q^d-1}, \end{aligned}$$
    (37)

    where \(q+q^{-1}=\beta \).

  2. (ii)

    Suppose \(\beta = 2\) and \(\mathrm{char}({\mathbb {F}}) \not =2\). Then

    $$\begin{aligned} \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0-\theta _d} = \frac{i(d-i+1)}{d}. \end{aligned}$$
    (38)
  3. (iii)

    Suppose \(\beta = -2\), \(\mathrm{char}({\mathbb {F}}) \not =2\), and d odd. Then

    $$\begin{aligned} \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0-\theta _d} = \left\{ \begin{array}{ll} 0, &{} \text {if }i\text { is even}; \\ 1, &{} \text {if }i\text { is odd.} \end{array} \right. \end{aligned}$$
    (39)
  4. (iv)

    Suppose \(\beta = -2\), \(\mathrm{char}({\mathbb {F}}) \not =2\), and d even. Then

    $$\begin{aligned} \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0-\theta _d} = \left\{ \begin{array}{ll} i/d, &{} \text {if }i\text { is even; } \\ (d-i+1)/d, \quad &{} \text {if }i\text { is odd. } \end{array} \right. \end{aligned}$$
    (40)
  5. (v)

    Suppose \(\beta = 0\), \(\mathrm{char}({\mathbb {F}}) =2\), and \(d=3\). Then

    $$\begin{aligned} \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0-\theta _d} = \left\{ \begin{array}{ll} 0, &{} \text {if }i \text { is even; } \\ 1, \quad &{} \text {if }i\text { is odd. } \end{array} \right. \end{aligned}$$
    (41)

Proof

The above sums can be computed directly from Lemma 12.3. \(\square \)
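As an informal check of Lemma 13.1(i), the sketch below takes the \(\beta \)-recurrent sequence \(\theta _h = q^h\) (so \(\beta = q+q^{-1}\)) and compares the partial sums (34) with the closed form (37), using exact rational arithmetic; the values \(q=2\), \(d=6\) are arbitrary choices.

```python
from fractions import Fraction

# Spot check of (37) on the beta-recurrent sequence theta_h = q^h.
q = Fraction(2)
d = 6
theta = [q**h for h in range(d + 1)]

ok = all(
    sum(theta[h] - theta[d-h] for h in range(i)) / (theta[0] - theta[d])
    == (q**i - 1) / (q - 1) * (q**(d-i+1) - 1) / (q**d - 1)
    for i in range(d + 2)
)
```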

Note 13.2

Referring to Lemma 13.1, the cases (iii), (iv) can be handled in the following uniform way. Suppose \(\beta =-2\) and \(\mathrm{char}({\mathbb {F}}) \not =2\). Then for \(0 \le i \le d+1\),

$$\begin{aligned} \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0-\theta _d} = \frac{2d+1+(2i-2d-1)(-1)^i +(-1)^d + (2i-1)(-1)^{i+d}}{4d}. \end{aligned}$$

We make an observation.

Lemma 13.3

Assume that \(\lbrace \theta _i\rbrace _{i=0}^d\) are mutually distinct and \(\beta \)-recurrent. Define

$$\begin{aligned} \vartheta _i = \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0-\theta _d} \qquad \qquad (0 \le i \le d+1). \end{aligned}$$
(42)

Then the sequence \(\lbrace \vartheta _i\rbrace _{i=0}^{d+1}\) is \(\beta \)-recurrent.

Proof

For \(d=1\) there is nothing to prove. For \(d=2\) we have

$$\begin{aligned} \vartheta _0 - (\beta +1) \vartheta _1 + (\beta +1) \vartheta _2 - \vartheta _3 =0 \end{aligned}$$

since

$$\begin{aligned} \vartheta _0=0,\qquad \vartheta _1=1,\qquad \vartheta _2=1,\qquad \vartheta _{3}=0. \end{aligned}$$

For \(d\ge 3\) the result is obtained by examining the cases in Lemma 13.1. \(\square \)

Proposition 13.4

Assume that \(\lbrace \theta _i\rbrace _{i=0}^d \) are mutually distinct and \(\beta \)-recurrent. Then for scalars \(\lbrace \vartheta _i\rbrace _{i=0}^{d+1}\) in \({\mathbb {F}}\) the following are equivalent:

  1. (i)

    \(\displaystyle \vartheta _i = \vartheta _1 \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0 - \theta _d} \qquad \qquad (0 \le i \le d+1)\);

  2. (ii)

    the sequence \(\lbrace \vartheta _i\rbrace _{i=0}^{d+1}\) is \(\beta \)-recurrent and

    $$\begin{aligned} \vartheta _0=0, \qquad \quad \vartheta _1 =\vartheta _d, \qquad \quad \vartheta _{d+1} = 0. \end{aligned}$$
    (43)

Proof

\(\mathrm{(i)} \Rightarrow \mathrm{(ii)}\) The sequence \(\lbrace \vartheta _i\rbrace _{i=0}^{d+1}\) is \(\beta \)-recurrent by Lemma 13.3. Condition (43) follows from (35).

\(\mathrm{(ii)} \Rightarrow \mathrm{(i)}\) Define

$$\begin{aligned} \Delta _i = \vartheta _i - \vartheta _1 \sum _{h=0}^{i-1} \frac{\theta _h-\theta _{d-h}}{\theta _0 - \theta _d} \qquad \qquad (0 \le i \le d+1). \end{aligned}$$

We show that \(\Delta _i=0\) for \(0 \le i \le d+1\). By construction

$$\begin{aligned}&\Delta _0 = 0, \qquad \Delta _1 = 0, \qquad \Delta _d = 0, \qquad \Delta _{d+1} = 0. \end{aligned}$$
(44)

For the rest of the proof we assume that \(d\ge 3\); otherwise we are done. By construction and Lemma 13.3, the sequence \(\lbrace \Delta _i \rbrace _{i=0}^{d+1}\) is \(\beta \)-recurrent. We break the argument into cases.

Case \(\beta \not =2\), \(\beta \not =-2\). Pick \(0 \not = q \in \overline{{\mathbb {F}}}\) such that \(q+q^{-1}=\beta \). There exist \(\alpha _1\), \(\alpha _2\), \(\alpha _3\) in \({{\mathbb {F}}}[q ]\) such that

$$\begin{aligned} \Delta _i = \alpha _1 + \alpha _2 q^i + \alpha _3 q^{-i} \qquad \qquad (0 \le i \le d+1). \end{aligned}$$

Since \(\lbrace \theta _i \rbrace _{i=0}^d\) are mutually distinct and \(\beta \)-recurrent, we have \(q^i \not =1\) for \(1 \le i \le d\). The first three equations in (44) give

$$\begin{aligned} \left( \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right) = \left( \begin{array}{ccc} 1 &{} 1 &{} 1 \\ 1 &{} q &{} q^{-1} \\ 1 &{} q^d &{} q^{-d} \end{array} \right) \left( \begin{array}{c} \alpha _1 \\ \alpha _2 \\ \alpha _3 \end{array} \right) . \end{aligned}$$

For the above equation the coefficient matrix has determinant

$$\begin{aligned} (q-1)(q^d-1)(q^{d-1}-1)q^{-d}, \end{aligned}$$

which is nonzero. Therefore the coefficient matrix is invertible, so each of \(\alpha _1\), \(\alpha _2\), \(\alpha _3\) is zero. Consequently \(\Delta _i = 0\) for \(0 \le i \le d+1\).

Case \(\beta =2\) and \(\mathrm{char}({\mathbb {F}})\not =2\). There exist \(\alpha _1\), \(\alpha _2\), \(\alpha _3\) in \({{\mathbb {F}}}\) such that

$$\begin{aligned} \Delta _i = \alpha _1 + \alpha _2 i + \alpha _3 i(i-1)/2 \qquad \qquad (0 \le i \le d+1). \end{aligned}$$

Since \(\lbrace \theta _i \rbrace _{i=0}^d\) are mutually distinct and \(\beta \)-recurrent, we have \(\mathrm{char}({\mathbb {F}})=0\) or \(\mathrm{char}({\mathbb {F}})=p\) with \(d<p\). The first three equations in (44) give

$$\begin{aligned} \left( \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right) = \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 \\ 1 &{} d &{} d(d-1)/2 \end{array} \right) \left( \begin{array}{c} \alpha _1 \\ \alpha _2 \\ \alpha _3 \end{array} \right) . \end{aligned}$$

For the above equation the coefficient matrix has determinant \(d(d-1)/2\), which is nonzero. Therefore the coefficient matrix is invertible, so each of \(\alpha _1\), \(\alpha _2\), \(\alpha _3\) is zero. Consequently \(\Delta _i = 0\) for \(0 \le i \le d+1\).

Case \(\beta =-2\) and \(\mathrm{char}({\mathbb {F}})\not =2\). There exist \(\alpha _1\), \(\alpha _2\), \(\alpha _3\) in \({{\mathbb {F}}}\) such that

$$\begin{aligned} \Delta _i = \alpha _1 + \alpha _2 (-1)^i + \alpha _3 i (-1)^i \qquad \qquad (0 \le i \le d+1). \end{aligned}$$

Since \(\lbrace \theta _i \rbrace _{i=0}^d\) are mutually distinct and \(\beta \)-recurrent, we have

$$\begin{aligned} \mathrm{char}({\mathbb {F}})=0 \quad \mathrm{or} \quad \mathrm{char}({\mathbb {F}})=p, \quad d< 2p. \end{aligned}$$
(45)

The first three equations in (44) give

$$\begin{aligned} \left( \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right) = \left( \begin{array}{ccc} 1 &{} 1 &{} 0 \\ 1 &{} -1 &{} -1 \\ 1 &{} (-1)^d &{} d(-1)^d \end{array} \right) \left( \begin{array}{c} \alpha _1 \\ \alpha _2 \\ \alpha _3 \end{array} \right) . \end{aligned}$$

For the above equation, consider the determinant of the coefficient matrix. For even \(d=2n\) this determinant is \(-2^2n\), and for odd \(d=2n+1\) this determinant is \(2^2 n\). Note that \(2 \not =0\) in \({\mathbb {F}}\) since \(\mathrm{char}({\mathbb {F}})\not =2\). For either parity of d we have \(n\not =0\) in \({\mathbb {F}}\) by (45). So for either parity of d the determinant is nonzero. Therefore the coefficient matrix is invertible, so each of \(\alpha _1\), \(\alpha _2\), \(\alpha _3\) is zero. Consequently \(\Delta _i = 0\) for \(0 \le i \le d+1\).

Case \(\beta =0\) and \(\mathrm{char}({\mathbb {F}})=2\). We have \(d=3\) by Lemma 12.2(iv). There exist \(\alpha _1\), \(\alpha _2\), \(\alpha _3\) in \({{\mathbb {F}}}\) such that

$$\begin{aligned} \Delta _i = \alpha _1 + \alpha _2 i + \alpha _3 i (i-1)/2 \qquad \qquad (0 \le i \le 4), \end{aligned}$$

where \(i(i-1)/2\) is interpreted as described at the end of Lemma 12.1. The first three equations in (44) give

$$\begin{aligned} \left( \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right) = \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 \\ 1 &{} 1 &{} 1 \end{array} \right) \left( \begin{array}{c} \alpha _1 \\ \alpha _2 \\ \alpha _3 \end{array} \right) . \end{aligned}$$

In the above equation the coefficient matrix is invertible, so each of \(\alpha _1\), \(\alpha _2\), \(\alpha _3\) is zero. Consequently \(\Delta _i = 0\) for \(0 \le i \le d+1\). \(\square \)
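As an informal check of the determinant evaluation used in the first case of the above proof, the sketch below verifies \(\det \) of the displayed coefficient matrix against the stated factorization \((q-1)(q^d-1)(q^{d-1}-1)q^{-d}\), with exact rational arithmetic; the choice \(q=7\) is arbitrary.

```python
from fractions import Fraction

# Spot check of the determinant of [[1,1,1],[1,q,1/q],[1,q^d,q^(-d)]]
# against (q-1)(q^d-1)(q^(d-1)-1)*q^(-d), for several values of d.
def det3(M):
    (a, b, c), (p, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(p*i - f*g) + c*(p*h - e*g)

q = Fraction(7)
ok = all(
    det3([[1, 1, 1], [1, q, 1/q], [1, q**d, q**(-d)]])
    == (q - 1)*(q**d - 1)*(q**(d-1) - 1)*q**(-d)
    for d in range(2, 9)
)
```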

14 The Polynomial P(x, y)

Let \(\beta , \gamma , \varrho \) denote scalars in \({\mathbb {F}}\), and consider a polynomial in two variables

$$\begin{aligned} P(x,y)= x^2-\beta xy+y^2-\gamma (x+y)-\varrho . \end{aligned}$$
(46)

Note that \(P(x,y)= P(y,x)\). Let \(\lbrace \theta _i \rbrace _{i=0}^d\) denote scalars in \({\mathbb {F}}\).

Lemma 14.1

The following are equivalent:

  1. (i)

    \(P(\theta _{i-1},\theta _i)=0 \) for \(1 \le i \le d\);

  2. (ii)

    the sequence \(\lbrace \theta _i \rbrace _{i=0}^d\) is \((\beta , \gamma , \varrho )\)-recurrent.

Proof

By Definition 11.1(iv). \(\square \)

Proposition 14.2

Assume that \(\lbrace \theta _i\rbrace _{i=0}^d\) are mutually distinct and \((\beta , \gamma ,\varrho )\)-recurrent. Then the following hold:

  1. (i)

    \(P(x,\theta _j) = (x-\theta _{j-1})(x-\theta _{j+1}) \qquad (1 \le j \le d-1)\);

  2. (ii)

    for \(0 \le i,j\le d\), \(P(\theta _i, \theta _j) = 0\) implies \(|i-j|=1\) or \(i,j\in \lbrace 0,d\rbrace \).

Proof

  1. (i)

    The polynomial \(P(x,\theta _j)\) is monic in x, and has roots \(\theta _{j-1}\), \(\theta _{j+1}\) by Lemma 14.1.

  2. (ii)

Assume that \(P(\theta _i,\theta _j)=0\). Also assume that \(1 \le i \le d-1\) or \(1 \le j \le d-1\); otherwise \(i,j\in \lbrace 0,d\rbrace \) and we are done. Interchanging \(i,j\) if necessary, we may assume that \(1 \le j \le d-1\). Using (i) we have \(0= P(\theta _i,\theta _j)= (\theta _i-\theta _{j-1})(\theta _i-\theta _{j+1})\). Therefore \(i=j-1\) or \(i=j+1\), so \(|i-j|=1\).

\(\square \)
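As an informal check of Proposition 14.2(i), the sketch below uses the \((\beta ,0,0)\)-recurrent example \(\theta _i = q^i\) with \(\beta = q+q^{-1}\) (the choices \(q=3\), \(d=5\) are arbitrary) and compares the two sides at several sample points; both sides are quadratic in x, so agreement at three or more points suffices.

```python
from fractions import Fraction

# Spot check of Proposition 14.2(i) on theta_i = q^i,
# beta = q + 1/q, gamma = 0, rho = 0.
q = Fraction(3)
d = 5
beta, gamma, rho = q + 1/q, Fraction(0), Fraction(0)
theta = [q**i for i in range(d + 1)]

def P(x, y):
    # the polynomial (46)
    return x*x - beta*x*y + y*y - gamma*(x + y) - rho

xs = [Fraction(v) for v in (-2, 0, 1, 5, 11)]
ok = all(
    P(x, theta[j]) == (x - theta[j-1]) * (x - theta[j+1])
    for j in range(1, d)
    for x in xs
)
```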

15 The Tridiagonal Relations

In this section, we consider a Leonard system

$$\begin{aligned} \Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d) \end{aligned}$$

on V, with eigenvalue sequence \(\lbrace \theta _i \rbrace _{i=0}^d\) and dual eigenvalue sequence \(\lbrace \theta ^*_i \rbrace _{i=0}^d\). Our goal is to prove the following result.

Theorem 15.1

(See [18, Theorem 1.12]). There exists a sequence of scalars \(\beta ,\gamma ,\gamma ^*,\varrho ,\varrho ^*\) taken from \({\mathbb {F}}\) such that both

(TD1):

\(\qquad 0 = [A, A^2 A^*-\beta A A^* A+A^* A^2-\gamma \left( A A^*+A^* A\right) -\varrho A^* ]\),

(TD2):

\(\qquad 0=[A^*, A^{*2} A-\beta A^*AA^* +AA^{*2}-\gamma ^* \left( A^*A+AA^*\right) -\varrho ^* A]\).

The sequence is unique if \(d\ge 3\).

The relations (TD1), (TD2) are called the tridiagonal relations. They are displayed in [16, Lemma 5.4] and examined carefully in [17].

Lemma 15.2

For \(\beta , \gamma , \varrho \in {\mathbb {F}}\) the following are equivalent:

  1. (i)

    the scalars \(\beta , \gamma , \varrho \) satisfy (TD1);

  2. (ii)

    the sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta ,\gamma , \varrho )\)-recurrent.

Proof

Let C denote the expression on the right in (TD1). We have

$$\begin{aligned} C = \sum _{i=0}^d \sum _{j=0}^d E_iCE_j. \end{aligned}$$

For \(0 \le i,j\le d\),

$$\begin{aligned} E_iCE_j = (\theta _i-\theta _j)P(\theta _i, \theta _j)E_iA^*E_j \end{aligned}$$
(47)

where P is from (46).

\(\mathrm{(i)} \Rightarrow \mathrm{(ii)}\) We have \(C=0\). So for \(1 \le j \le d\),

$$\begin{aligned} 0= E_{j-1}CE_j= (\theta _{j-1}-\theta _j)P(\theta _{j-1},\theta _j)E_{j-1}A^*E_j. \end{aligned}$$

By construction \(\theta _{j-1}\not =\theta _j\) and \(E_{j-1}A^*E_j\not =0\). Therefore \(P(\theta _{j-1},\theta _j)=0\). Consequently the sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta ,\gamma , \varrho )\)-recurrent.

\(\mathrm{(ii)} \Rightarrow \mathrm{(i)}\) For \(0 \le i,j\le d\) the right-hand side of (47) has at least one zero factor, so \(E_iCE_j=0\). Consequently \(C=0\). \(\square \)

Lemma 15.3

The following (i)–(iii) hold for \(0 \le i,j\le d\):

  1. (i)

    \(E^*_i A^r E^*_j=0\) for \(0 \le r<\vert i-j\vert \);

  2. (ii)

    \(E^*_iA^rE^*_j \not =0\) for \(r=\vert i-j\vert \);

  3. (iii)

    for \(0 \le r,s\le d\),

    $$\begin{aligned}&E^*_i A^r A^* A^s E^*_j= {\left\{ \begin{array}{ll} \theta ^*_{j+s} E^*_i A^{r+s} E^*_j,&{} {\text {if }i-j=r+s}; \\ \theta ^*_{j-s}E^*_i A^{r+s} E^*_j,&{} {\text {if }j-i=r+s}; \\ 0,&{} {\text { if }\vert i-j\vert >r+s}. \end{array}\right. } \end{aligned}$$

Proof

For \(0 \le i \le d\) pick \(0 \not =v_i \in E^*_iV\). So \(\lbrace v_i \rbrace _{i=0}^d\) is a basis for V. Without loss of generality, we may identify each \(X\in \mathrm{End}(V)\) with the matrix in \(\mathrm{Mat}_{d+1}({\mathbb {F}})\) that represents X with respect to \(\lbrace v_i \rbrace _{i=0}^d\). From this point of view, A is irreducible tridiagonal and \(A^* = \mathrm{diag}(\theta ^*_0,\theta ^*_1,\ldots ,\theta ^*_d)\). Moreover for \(0 \le i \le d\), the matrix \(E^*_i\) is diagonal with \((i,i)\)-entry 1 and all other entries 0. Using these matrix representations, one routinely verifies the assertions (i)–(iii) in the lemma statement. \(\square \)
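As an informal check of Lemma 15.3(i),(ii), note that \(E^*_iA^rE^*_j\) is zero or not according as the \((i,j)\)-entry of \(A^r\) is zero or not. The sketch below verifies, for an arbitrarily chosen irreducible tridiagonal matrix with \(d=3\), that the \((i,j)\)-entry of \(A^r\) vanishes for \(r<\vert i-j\vert \) and is nonzero for \(r=\vert i-j\vert \).

```python
from fractions import Fraction

# Spot check: for an irreducible tridiagonal A, the (i,j)-entry of A^r
# is 0 for r < |i-j| and nonzero for r = |i-j| (it is the product of
# the off-diagonal entries along the path from j to i).
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[Fraction(v) for v in row] for row in
     [[2, 1, 0, 0],
      [1, 5, 3, 0],
      [0, 4, 7, 2],
      [0, 0, 1, 9]]]

powers = [[[Fraction(i == j) for j in range(4)] for i in range(4)]]  # A^0 = I
for _ in range(3):
    powers.append(matmul(powers[-1], A))

ok_zero = all(powers[r][i][j] == 0
              for i in range(4) for j in range(4)
              for r in range(abs(i - j)))
ok_nonzero = all(powers[abs(i - j)][i][j] != 0
                 for i in range(4) for j in range(4))
```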

Recall that \({\mathcal {D}}\) is the subalgebra of \(\mathrm{End}(V)\) generated by A.

Lemma 15.4

Define

$$\begin{aligned} L_i = E_0 + E_1 + \cdots + E_i \qquad \qquad (0 \le i \le d). \end{aligned}$$

Then

  1. (i)

    \(\lbrace L_i \rbrace _{i=0}^d\) is a basis for the vector space \({\mathcal {D}}\);

  2. (ii)

    \(L_d=I\);

  3. (iii)

    for \(0 \le i \le d-1\),

    $$\begin{aligned} L_i A^*-A^*L_i = E_i A^*E_{i+1}-E_{i+1}A^*E_i. \end{aligned}$$

Proof

  1. (i)

    Since \(\lbrace E_i \rbrace _{i=0}^d\) is a basis for \({\mathcal {D}}\).

  2. (ii)

    Since \(I=\sum _{i=0}^d E_i\).

  3. (iii)

    For \(0\le j\le d-1\) we have

    $$\begin{aligned} E_jA^*&= E_j A^* (E_0 + \cdots + E_d) \nonumber \\&=E_jA^*E_{j-1}+E_jA^*E_j+E_jA^*E_{j+1} , \end{aligned}$$
    (48)

    where \(E_{-1}=0\). Similarly for \(0 \le j \le d-1\),

    $$\begin{aligned} A^*E_j&= E_{j-1}A^*E_j+E_jA^*E_j+E_{j+1}A^*E_j. \end{aligned}$$
    (49)

    Sum both (48) and (49) over \(j=0,1,\ldots ,i\) and take the difference between these two sums.

\(\square \)

Lemma 15.5

We have

$$\begin{aligned}&\mathrm{Span}\lbrace XA^*Y-YA^*X \,\vert \, X,Y\in {{\mathcal {D}}} \rbrace = \lbrace ZA^*-A^*Z\, \vert \, Z\in {{\mathcal {D}}} \rbrace . \end{aligned}$$

Proof

Using Lemma 15.4 we obtain

$$\begin{aligned} \mathrm{Span}\lbrace XA^*Y&-YA^*X \,|\, X,Y\in {{\mathcal {D}}} \rbrace \\&=\mathrm{Span}\lbrace E_iA^*E_j-E_jA^*E_i \,\vert \, 0\le i,j\le d \rbrace \\&= \mathrm{Span}\lbrace E_iA^*E_{i+1}-E_{i+1}A^*E_i\,\vert \,0\le i\le d-1 \rbrace \\&= \mathrm{Span}\lbrace L_iA^*-A^*L_i \,\vert \,0\le i\le d-1 \rbrace \\&=\{ ZA^*-A^*Z \,\vert \, Z\in {{\mathcal {D}}} \rbrace . \end{aligned}$$

\(\square \)

Proof of Theorem 15.1

First assume that \(d\ge 3\). By Lemma 15.5 (with \(X=A^2\) and \(Y=A\)) there exists \(Z\in {\mathcal {D}}\) such that

$$\begin{aligned} A^2A^*A-A A^*A^2 = ZA^*-A^*Z. \end{aligned}$$
(50)

Since \(\lbrace A^i \rbrace _{i=0}^d\) is a basis for \({\mathcal {D}}\), there exists a polynomial \(f\in {\mathbb {F}}[\lambda ]\) such that \(\mathrm{deg}(f)\le d\) and \(Z=f(A)\). Let k denote the degree of f.

We show that \(k=3\). We first assume that \(k<3\) and get a contradiction. We multiply each term in (50) on the left by \(E^*_3\) and on the right by \(E^*_0\). We evaluate the result using Lemma 15.3 to find \((\theta ^*_1-\theta ^*_2)E^*_3A^3E^*_0=0\). The scalar \( \theta ^*_1-\theta ^*_2\) is nonzero, and \(E^*_3A^3E^*_0\not =0\) by Lemma 15.3(ii). Therefore \( (\theta ^*_1-\theta ^*_2 )E^*_3A^3E^*_0\not =0\) for a contradiction. We have shown \(k\ge 3\). Let c denote the coefficient of \(\lambda ^k\) in f. By construction \(c\not =0\).

Next we assume that \(k>3\) and get a contradiction. We multiply each term in (50) on the left by \(E^*_k\) and on the right by \(E^*_0\). We evaluate the result using Lemma 15.3 to find \(0= c(\theta ^*_0-\theta ^*_k) E^*_kA^kE^*_0\). The scalars c and \( \theta ^*_0-\theta ^*_k\) are nonzero, and \(E^*_kA^kE^*_0\not =0\) by Lemma 15.3(ii). Therefore \( 0\not =c\left( \theta ^*_0-\theta ^*_k\right) E^*_kA^kE^*_0\) for a contradiction. We have shown \(k=3\).

Define \(\beta = c^{-1}-1\), so \(\beta +1=c^{-1}\). Multiply each term in (50) by \(c^{-1}\). The result is

$$\begin{aligned} (\beta +1)(A^2 A^* A-A A^*A^2) = A^3 A^* -A^*A^3 -\gamma (A^2 A^*-A^*A^2)-\varrho (AA^*-A^*A), \end{aligned}$$
(51)

where \(\gamma , \varrho \in {\mathbb {F}}\). The equation (51) is (TD1) in disguise; it is (TD1) with the commutator expanded. Therefore \(\beta , \gamma ,\varrho \) satisfy (TD1). Concerning (TD2), pick any integer i \((2 \le i\le d-1)\). We multiply each term in (51) on the left by \(E^*_{i-2}\) and on the right by \(E^*_{i+1}\). We evaluate the result using Lemma 15.3 to find that \(E^*_{i-2}A^3E^*_{i+1}\) times

$$\begin{aligned} \theta ^*_{i-2}-(\beta +1)\theta ^*_{i-1}+ (\beta +1)\theta ^*_i -\theta ^*_{i+1} \end{aligned}$$
(52)

is zero. We have \(E^*_{i-2}A^3E^*_{i+1}\not =0\) by Lemma 15.3(ii), so (52) is zero. Thus the sequence \(\lbrace \theta ^*_i\rbrace _{i=0}^d\) is \(\beta \)-recurrent. By Lemma 11.3 there exists \(\gamma ^* \in {\mathbb {F}}\) such that \(\lbrace \theta ^*_i\rbrace _{i=0}^d\) is \((\beta , \gamma ^*)\)-recurrent. By Lemma 11.4(i) there exists \(\varrho ^* \in {\mathbb {F}}\) such that \(\lbrace \theta ^*_i\rbrace _{i=0}^d\) is \((\beta , \gamma ^*, \varrho ^*)\)-recurrent. By this and Lemma 15.2 (applied to \(\Phi ^*\)) we see that \(\beta , \gamma ^*, \varrho ^*\) satisfy (TD2).

We have obtained scalars \(\beta , \gamma , \gamma ^*, \varrho , \varrho ^*\) in \({\mathbb {F}}\) that satisfy (TD1), (TD2). Next we show that these scalars are unique. Let \(\beta , \gamma , \gamma ^*, \varrho , \varrho ^*\) denote any scalars in \({\mathbb {F}}\) that satisfy (TD1), (TD2). By Lemma 15.2 the sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta , \gamma , \varrho )\)-recurrent. By Lemma 11.4(ii) the sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta , \gamma )\)-recurrent. By Lemma 11.3 the sequence \(\lbrace \theta _i\rbrace _{i=0}^d\) is \(\beta \)-recurrent. Also by Lemma 15.2 the sequence \(\lbrace \theta ^*_i\rbrace _{i=0}^d\) is \((\beta , \gamma ^*, \varrho ^*)\)-recurrent. By Lemma 11.4(ii) the sequence \(\lbrace \theta ^*_i\rbrace _{i=0}^d\) is \((\beta , \gamma ^*)\)-recurrent. By Lemma 11.3 the sequence \(\lbrace \theta ^*_i\rbrace _{i=0}^d\) is \(\beta \)-recurrent. By these comments and Definition 11.1,

  • the scalars

    $$\begin{aligned} \frac{\theta _{i-2}-\theta _{i+1}}{\theta _{i-1}-\theta _i},\qquad \frac{\theta ^*_{i-2}-\theta ^*_{i+1}}{\theta ^*_{i-1}-\theta ^*_i} \end{aligned}$$

    are both equal to \(\beta +1\) for \(2\le i\le d-1\);

  • \(\gamma =\theta _{i-1}-\beta \theta _i+\theta _{i+1} \qquad (1\le i\le d-1)\),

  • \(\gamma ^*=\theta ^*_{i-1}-\beta \theta ^*_i+\theta ^*_{i+1} \qquad (1\le i\le d-1)\),

  • \(\varrho = \theta ^2_{i-1}-\beta \,\theta _{i-1}\,\theta _i +\theta _{i}^2-\gamma \,(\theta _{i-1}+\theta _i) \qquad (1 \le i\le d)\),

  • \(\varrho ^*= \theta _{i-1}^{*2}-\beta \,\theta ^*_{i-1}\,\theta ^*_i +\theta _{i}^{*2}-\gamma ^*\,(\theta ^*_{i-1}+\theta ^*_i) \qquad ( 1 \le i \le d)\).

The above equations show that \(\beta , \gamma , \gamma ^*, \varrho , \varrho ^*\) are unique. We have proved the theorem under the assumption \(d\ge 3\).

Next assume that \(d\le 2\). Pick any \(\beta \in {\mathbb {F}}\). For \(d=2\) define \(\gamma =\theta _0-\beta \theta _1+\theta _2\) and for \(d\le 1\) pick any \(\gamma \in {\mathbb {F}}\). For \(d\ge 1\) define

$$\begin{aligned} \varrho = \theta _0^2-\beta \theta _0\theta _1 +\theta _1^2-\gamma (\theta _0+\theta _1) \end{aligned}$$

and for \(d=0\) pick any \(\varrho \in {\mathbb {F}}\). One checks that \(\lbrace \theta _i \rbrace _{i=0}^d\) is \((\beta ,\gamma ,\varrho )\)-recurrent. By Lemma 15.2 the scalars \(\beta , \gamma , \varrho \) satisfy (TD1). Replacing \(\Phi \) by \(\Phi ^*\) in the above argument, we obtain \(\gamma ^*, \varrho ^* \in {\mathbb {F}}\) such that \(\beta , \gamma ^*, \varrho ^*\) satisfy (TD2). \(\square \)

We emphasize one aspect of the above proof.

Corollary 15.6

(See [18, Lemma 12.7]). For the Leonard system \(\Phi \) the scalars

$$\begin{aligned} \frac{\theta _{i-2}-\theta _{i+1}}{\theta _{i-1}-\theta _i},\qquad \frac{\theta ^*_{i-2}-\theta ^*_{i+1}}{\theta ^*_{i-1}-\theta ^*_i} \end{aligned}$$
(53)

are equal and independent of i for \(2\le i\le d-1\).

16 The Tridiagonal Relations, Cont.

Throughout this section let \(\lbrace \theta _i\rbrace _{i=0}^d\), \(\lbrace \theta ^*_i\rbrace _{i=0}^d\), \(\lbrace \varphi _i\rbrace _{i=1}^d\) denote scalars in \({\mathbb {F}}\). Define matrices \(A, A^* \in \mathrm{Mat}_{d+1} ({\mathbb {F}})\) by

$$\begin{aligned} A = \left( \begin{array}{cccccc} \theta _0 &{} &{} &{} &{} &{} \mathbf{0} \\ 1 &{} \theta _1 &{} &{} &{} &{} \\ &{} 1 &{} \theta _2 &{} &{} &{} \\ &{}&{} \cdot &{} \cdot &{}&{} \\ &{} &{} &{} \cdot &{} \cdot &{} \\ \mathbf{0} &{} &{} &{} &{} 1 &{} \theta _d \end{array} \right) , \qquad \qquad A^* = \left( \begin{array}{cccccc} \theta ^*_0 &{} \varphi _1 &{} &{} &{} &{} \mathbf{0} \\ &{} \theta ^*_1 &{} \varphi _2 &{} &{} &{} \\ &{} &{} \theta ^*_2 &{} \cdot &{} &{} \\ &{}&{} &{} \cdot &{} \cdot &{} \\ &{} &{} &{} &{} \cdot &{} \varphi _d \\ \mathbf{0} &{} &{} &{} &{} &{} \theta ^*_d \end{array} \right) . \end{aligned}$$

Definition 16.1

Define the scalars

$$\begin{aligned} \vartheta _i = \varphi _i - (\theta ^*_i-\theta ^*_0)(\theta _{i-1}-\theta _d) \qquad \qquad (1 \le i \le d) \end{aligned}$$

and \(\vartheta _0=0\), \(\vartheta _{d+1}=0\).

Lemma 16.2

(See [18, Lemma 12.4]). Let \(\beta , \gamma , \varrho \) denote scalars in \({\mathbb {F}}\), and consider the commutator

$$\begin{aligned}{}[A,A^2A^*-\beta AA^*A + A^*A^2 -\gamma (AA^*+A^*A)-\varrho A^*]. \end{aligned}$$
(54)

Then the entries of (54) are as follows.

  1. (i)

    The \((i+1,i-2)\)-entry is

    $$\begin{aligned} \theta ^*_{i-2}\;-\;(\beta +1) \theta ^*_{i-1} \;+\;(\beta +1) \theta ^*_i \;-\;\theta ^*_{i+1} \end{aligned}$$

    for \(2 \le i \le d-1\).

  2. (ii)

    The \((i,i-2)\)-entry is

    $$\begin{aligned}&\vartheta _{i-2}\;-\;(\beta +1) \vartheta _{i-1} \;+\;(\beta +1) \vartheta _i \;-\;\vartheta _{i+1} \\&\quad +\;(\theta ^*_{i-2}-\theta ^*_0) (\theta _{i-3}\;-\;(\beta +1) \theta _{i-2} \;+\;(\beta +1) \theta _{i-1} \;-\;\theta _i) \\&\quad +\;(\theta _i-\theta _d) (\theta ^*_{i-2}\;-\;(\beta +1) \theta ^*_{i-1} \;+\;(\beta +1) \theta ^*_i \;-\;\theta ^*_{i+1}) \\&\quad +\;(\theta ^*_{i-2}-\theta ^*_i) (\theta _{i-2}\;-\;\beta \theta _{i-1} \;+\;\theta _i\;-\;\gamma ) \end{aligned}$$

    for \(2 \le i \le d\), where \(\lbrace \vartheta _i\rbrace _{i=0}^{d+1}\) are from Definition 16.1.

  3. (iii)

    The \((i,i-1)\)-entry is

    $$\begin{aligned}&\varphi _{i-1}(\theta _{i-2}-\beta \theta _{i-1}+\theta _i-\gamma )\;-\; \varphi _{i+1}(\theta _{i-1}-\beta \theta _i+\theta _{i+1}-\gamma ) \\&\quad +\;(\theta ^*_{i-1}-\theta ^*_i) (\theta ^2_{i-1}-\beta \theta _{i-1}\theta _i + \theta _i^2 -\gamma (\theta _{i-1} +\theta _i) -\varrho ) \end{aligned}$$

    for \(1 \le i \le d\).

  4. (iv)

The \((i,i)\)-entry is

    $$\begin{aligned}&\varphi _i(\theta ^2_{i-1}-\beta \theta _{i-1}\theta _i + \theta _i^2 -\gamma (\theta _{i-1} +\theta _i) -\varrho ) \\&\quad -\;\varphi _{i+1}(\theta ^2_i-\beta \theta _i\theta _{i+1} + \theta _{i+1}^2 -\gamma (\theta _{i} +\theta _{i+1}) -\varrho ) \end{aligned}$$

    for \(0 \le i \le d\).

  5. (v)

    The \((i-1,i)\)-entry is

    $$\begin{aligned}&\varphi _i(\theta _{i-1}-\theta _i)(\theta ^2_{i-1}-\beta \theta _{i-1}\theta _i + \theta _i^2 -\gamma (\theta _{i-1} +\theta _i) -\varrho ) \end{aligned}$$

    for \(1 \le i \le d\).

All other entries in (54) are zero. In the above formulas, we assume \(\varphi _0=0\), \(\varphi _{d+1}=0\), and that \(\theta _{-1}\), \(\theta _{d+1}\), \(\theta ^*_{d+1}\) are indeterminates.

Proof

Routine matrix multiplication. \(\square \)

Lemma 16.3

Let \(\beta ,\gamma , \varrho \) denote scalars in \({\mathbb {F}}\). Assume that \(\lbrace \theta _i \rbrace _{i=0}^d\) are mutually distinct and \((\beta , \gamma ,\varrho )\)-recurrent. Assume that \(\lbrace \theta ^*_i \rbrace _{i=0}^d\) is \(\beta \)-recurrent. Then for the commutator (54) the \((i,i-2)\)-entry is

$$\begin{aligned} \vartheta _{i-2} - (\beta +1) \vartheta _{i-1} +(\beta +1) \vartheta _{i} - \vartheta _{i+1} \end{aligned}$$

for \(2 \le i \le d\), where \(\lbrace \vartheta _i\rbrace _{i=0}^{d+1}\) are from Definition 16.1. All other entries in (54) are zero.

Proof

Examine the entries given in Lemma 16.2. \(\square \)
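The entry formula in Lemma 16.3 can be confirmed numerically. The sketch below is our own illustration and assumes the standard meanings of the recurrence conditions: a sequence is \(\beta \)-recurrent when \(\theta _{i-2}-(\beta +1)\theta _{i-1}+(\beta +1)\theta _i-\theta _{i+1}=0\), while \(\gamma \) and \(\varrho \) are the constant values of \(\theta _{i-1}-\beta \theta _i+\theta _{i+1}\) and \(\theta _{i-1}^2-\beta \theta _{i-1}\theta _i+\theta _i^2-\gamma (\theta _{i-1}+\theta _i)\); it also takes \(\vartheta _i = \varphi _i-(\theta ^*_i-\theta ^*_0)(\theta _{i-1}-\theta _d)\) with \(\vartheta _0=\vartheta _{d+1}=0\), as in Definition 16.1. The numeric data are arbitrary choices satisfying these hypotheses.

```python
# Numeric spot check of Lemma 16.3 (illustrative data of our own choosing).
# With q = 2, theta_i = q^i + 3 is (beta, gamma, rho)-recurrent and
# theta*_i = q^(-i) is beta-recurrent, where beta = q + 1/q.
import numpy as np

q, d = 2.0, 5
theta = np.array([q**i + 3 for i in range(d + 1)])
thetas = np.array([q**(-i) for i in range(d + 1)])
vphi = np.array([0.0] + [float(i + 1) for i in range(1, d + 1)])  # vphi[0] unused

beta = q + 1 / q
gamma = theta[0] - beta * theta[1] + theta[2]
rho = (theta[0]**2 - beta * theta[0] * theta[1] + theta[1]**2
       - gamma * (theta[0] + theta[1]))

# matrix representations as in Proposition 7.6(iii)
A = np.diag(theta) + np.diag(np.ones(d), -1)   # lower bidiagonal
As = np.diag(thetas) + np.diag(vphi[1:], 1)    # upper bidiagonal

M = (A @ A @ As - beta * A @ As @ A + As @ A @ A
     - gamma * (A @ As + As @ A) - rho * As)
C = A @ M - M @ A                              # the commutator (54)

# vartheta_i as in Definition 16.1, with vartheta_0 = vartheta_{d+1} = 0
vt = [0.0] + [vphi[i] - (thetas[i] - thetas[0]) * (theta[i - 1] - theta[d])
              for i in range(1, d + 1)] + [0.0]

for i in range(d + 1):
    for j in range(d + 1):
        if i - j == 2:  # the only nonzero band
            want = vt[i - 2] - (beta + 1) * vt[i - 1] + (beta + 1) * vt[i] - vt[i + 1]
            assert abs(C[i, j] - want) < 1e-6
        else:
            assert abs(C[i, j]) < 1e-6
print("Lemma 16.3 confirmed numerically")
```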

Proposition 16.4

With the notation and assumptions of Lemma 16.3,

$$\begin{aligned} 0 = [A,A^2A^*-\beta AA^*A + A^*A^2 -\gamma (AA^*+A^*A)-\varrho A^*]\end{aligned}$$

if and only if \(\lbrace \vartheta _i \rbrace _{i=0}^{d+1}\) is \(\beta \)-recurrent.

Proof

By Lemma 16.3. \(\square \)

17 The Proof of Theorem 10.1

In this section we prove Theorem 10.1.

Proof of Theorem 10.1

We may assume that \(d\ge 1\); otherwise the result is vacuous. Assume that there exists a Leonard system \(\Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) over \({\mathbb {F}}\) with parameter array (21). We show that this parameter array satisfies (PA1)–(PA5). Condition (PA1) holds since A and \(A^*\) are multiplicity-free. Condition (PA2) holds by the comment below Definition 9.1. Condition (PA5) holds by Corollary 15.6. By (PA5) and Lemma 11.2, there exists \(\beta \in {\mathbb {F}}\) such that \(\lbrace \theta _i \rbrace _{i=0}^d\) is \(\beta \)-recurrent and \(\lbrace \theta ^*_i \rbrace _{i=0}^d\) is \(\beta \)-recurrent. Concerning (PA3), define

$$\begin{aligned} \vartheta _i = \varphi _i - (\theta ^*_i-\theta ^*_0)(\theta _{i-1}-\theta _d) \qquad \qquad (1 \le i \le d). \end{aligned}$$

We show that

$$\begin{aligned} \vartheta _i = \phi _1 \sum _{h=0}^{i-1} \frac{\theta _h - \theta _{d-h}}{\theta _0 - \theta _d} \qquad \qquad (1 \le i \le d). \end{aligned}$$
(55)

To this end we invoke Proposition 13.4. We will show (i) \(\vartheta _1=\phi _1\); (ii) \(\vartheta _d = \phi _1\); (iii) \(\lbrace \vartheta _i\rbrace _{i=0}^{d+1}\) is \(\beta \)-recurrent, where \(\vartheta _0 = 0 \) and \(\vartheta _{d+1}=0\). To show (i), we compare the formulas for \(a_0\) in Lemmas 8.1, 9.4 to obtain \(\varphi _1 - \phi _1 = (\theta ^*_1-\theta ^*_0)(\theta _0 - \theta _d)\). We have

$$\begin{aligned} \vartheta _1 = \varphi _1 - (\theta ^*_1-\theta ^*_0)(\theta _0 - \theta _d) = \phi _1. \end{aligned}$$

To show (ii), we compare the formulas for \(a^*_d\) in Lemmas 8.1, 9.4 to obtain \(\varphi _d - \phi _1 = (\theta ^*_d-\theta ^*_0)(\theta _{d-1} - \theta _d)\). We have

$$\begin{aligned} \vartheta _d = \varphi _d - (\theta ^*_d-\theta ^*_0)(\theta _{d-1} - \theta _d) = \phi _1. \end{aligned}$$

To show (iii) we apply Proposition 16.4 to the matrix representations of A, \(A^*\) from Proposition 7.6(iii). Recall that \(\lbrace \theta _i \rbrace _{i=0}^d\) is \(\beta \)-recurrent. By Lemma 11.3 there exists \(\gamma \in {\mathbb {F}}\) such that \(\lbrace \theta _i \rbrace _{i=0}^d\) is \((\beta , \gamma )\)-recurrent. By Lemma 11.4(i) there exists \(\varrho \in {\mathbb {F}}\) such that \(\lbrace \theta _i \rbrace _{i=0}^d\) is \((\beta , \gamma ,\varrho )\)-recurrent. The scalars \(\beta , \gamma , \varrho \) satisfy (TD1) by Lemma 15.2. The assumptions of Lemma 16.3 are satisfied, so by Proposition 16.4 the sequence \(\lbrace \vartheta _i\rbrace _{i=0}^{d+1}\) is \(\beta \)-recurrent. We have shown (i)–(iii). Now (55) holds by Proposition 13.4, so (PA3) holds. To obtain (PA4), apply (PA3) to the Leonard system \(\Phi ^\Downarrow \), and use Lemma 9.7. We have obtained (PA1)–(PA5), and we are done in one direction.
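The identity (55) and its (PA4) counterpart can be checked on a concrete parameter array. The sketch below uses the Krawtchouk-type array \(\theta _i=\theta ^*_i=d-2i\), \(\varphi _i=-2i(d-i+1)\), \(\phi _i=2i(d-i+1)\); this example is our own choice and is not constructed in the text.

```python
# Our own example (Krawtchouk type): theta_i = theta*_i = d - 2i,
# varphi_i = -2i(d-i+1), phi_i = 2i(d-i+1). We check (PA3) and (PA4) in the
# cleared-denominator form (both sides multiplied by theta_0 - theta_d),
# so that all arithmetic is exact integer arithmetic.
d = 6
theta = [d - 2 * i for i in range(d + 1)]
thetas = list(theta)
varphi = [None] + [-2 * i * (d - i + 1) for i in range(1, d + 1)]
phi = [None] + [2 * i * (d - i + 1) for i in range(1, d + 1)]

for i in range(1, d + 1):
    num = sum(theta[h] - theta[d - h] for h in range(i))  # numerator of the sum
    lhs3 = varphi[i] - (thetas[i] - thetas[0]) * (theta[i - 1] - theta[d])
    lhs4 = phi[i] - (thetas[i] - thetas[0]) * (theta[d - i + 1] - theta[0])
    assert lhs3 * (theta[0] - theta[d]) == phi[1] * num     # (PA3)
    assert lhs4 * (theta[0] - theta[d]) == varphi[1] * num  # (PA4)
print("(PA3) and (PA4) hold for the Krawtchouk-type array")
```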

We now reverse the direction. Assume that the scalars (21) satisfy (PA1)–(PA5). We display a Leonard system \(\Phi \) over \({\mathbb {F}}\) that has parameter array (21). Recall the vector space V with dimension \(d+1\). Pick a basis \(\lbrace u_i \rbrace _{i=0}^d\) for V. Define \(A, A^* \in \mathrm{End}(V)\) with the following matrix representations with respect to \(\lbrace u_i \rbrace _{i=0}^d\):

$$\begin{aligned} A: \quad \left( \begin{array}{cccccc} \theta _0 &{} &{} &{} &{} &{} \mathbf{0} \\ 1 &{} \theta _1 &{} &{} &{} &{} \\ &{} 1 &{} \theta _2 &{} &{} &{} \\ &{}&{} \cdot &{} \cdot &{}&{} \\ &{} &{} &{} \cdot &{} \cdot &{} \\ \mathbf{0} &{} &{} &{} &{} 1 &{} \theta _d \end{array} \right) , \qquad \qquad A^*:\quad \left( \begin{array}{cccccc} \theta ^*_0 &{} \varphi _1 &{} &{} &{} &{} \mathbf{0} \\ &{} \theta ^*_1 &{} \varphi _2 &{} &{} &{} \\ &{} &{} \theta ^*_2 &{} \cdot &{} &{} \\ &{}&{} &{} \cdot &{} \cdot &{} \\ &{} &{} &{} &{} \cdot &{} \varphi _d \\ \mathbf{0} &{} &{} &{} &{} &{} \theta ^*_d \end{array} \right) . \end{aligned}$$

Observe that A (resp. \(A^*\)) is multiplicity-free with eigenvalues \(\lbrace \theta _i \rbrace _{i=0}^d\) (resp. \(\lbrace \theta ^*_i \rbrace _{i=0}^d\)); for \(0 \le i \le d\) let \(E_i\) (resp. \(E^*_i\)) denote the primitive idempotent of A (resp. \(A^*\)) for \(\theta _i\) (resp. \(\theta ^*_i\)). The sequence \(\Phi :=(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d) \) is a pre Leonard system on V. We show that \(\Phi \) is a Leonard system on V. To this end we show the following (56)–(64) for \(0 \le i,j\le d\):

$$\begin{aligned}&E^*_i A E^*_j = 0 \quad \mathrm{if} \quad i-j > 1; \end{aligned}$$
(56)
$$\begin{aligned}&E^*_i A E^*_j = 0 \quad \mathrm{if} \quad j-i> 1; \end{aligned}$$
(57)
$$\begin{aligned}&E^*_i A E^*_j \not = 0 \quad \mathrm{if} \quad i-j=1; \end{aligned}$$
(58)
$$\begin{aligned}&E^*_i A E^*_j \not = 0 \quad \mathrm{if} \quad j-i=1 \end{aligned}$$
(59)

and

$$\begin{aligned}&E_i A^* E_j = 0 \quad \mathrm{if} \quad d> i-j> 1; \end{aligned}$$
(60)
$$\begin{aligned}&E_i A^* E_j = 0 \quad \mathrm{if} \quad d= i-j>1; \end{aligned}$$
(61)
$$\begin{aligned}&E_i A^* E_j = 0 \quad \mathrm{if} \quad j-i > 1; \end{aligned}$$
(62)
$$\begin{aligned}&E_i A^* E_j \not = 0 \quad \mathrm{if} \quad i-j=1; \end{aligned}$$
(63)
$$\begin{aligned}&E_i A^* E_j \not = 0 \quad \mathrm{if} \quad j-i=1. \end{aligned}$$
(64)

Proposition 7.6 implies (56), (58), (62). Lemma 7.7 implies (64). Before proceeding we make some comments. The element \(E^*_0\) is normalizing by Proposition 7.6. By (PA5) and Lemma 11.2, there exists \(\beta \in {\mathbb {F}}\) such that \(\lbrace \theta _i\rbrace _{i=0}^d\) is \(\beta \)-recurrent and \(\lbrace \theta ^*_i\rbrace _{i=0}^d\) is \(\beta \)-recurrent. By Lemma 11.3 there exists \(\gamma \in {\mathbb {F}}\) such that \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta ,\gamma )\)-recurrent. By Lemma 11.4(i) there exists \(\varrho \in {\mathbb {F}}\) such that \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta ,\gamma ,\varrho )\)-recurrent. By (PA3), for \(1 \le i \le d\) we have

$$\begin{aligned} \varphi _i - (\theta ^*_i - \theta ^*_0)(\theta _{i-1}-\theta _d) = \phi _1 \sum _{h=0}^{i-1} \frac{\theta _h - \theta _{d-h}}{\theta _0-\theta _d}; \end{aligned}$$

let \(\vartheta _i\) denote this common value. Note that \(\vartheta _1=\phi _1= \vartheta _d\). For notational convenience define \(\vartheta _0 = 0\) and \(\vartheta _{d+1} = 0\).

We show (60). The sequence \(\lbrace \vartheta _i\rbrace _{i=0}^{d+1}\) satisfies Proposition 13.4(i). By Proposition 13.4 the sequence \(\lbrace \vartheta _i\rbrace _{i=0}^{d+1}\) is \(\beta \)-recurrent. Next we apply Proposition 16.4. We mentioned earlier that \(\lbrace \theta _i\rbrace _{i=0}^d\) is \((\beta , \gamma , \varrho )\)-recurrent and \(\lbrace \theta ^*_i\rbrace _{i=0}^d\) is \(\beta \)-recurrent. By these comments and Proposition 16.4, the scalars \(\beta , \gamma , \varrho \) satisfy (TD1). For \(0 \le i,j\le d\) we multiply each term in (TD1) on the left by \(E_i\) and on the right by \(E_j\). This yields

$$\begin{aligned} 0 = E_i A^* E_j (\theta _i-\theta _j)P(\theta _i,\theta _j), \end{aligned}$$
(65)

where P is from (46). By Proposition 14.2(ii) we obtain \(P(\theta _i,\theta _j) \not =0\) if \(d> i-j>1\). By this and (65) we have \(E_i A^*E_j =0\) if \(d> i-j>1\). We have shown (60).

We show (61). We may assume \(d\ge 2\); otherwise there is nothing to prove. We show \(E_d A^*E_0=0\). Since \(E^*_0\) is normalizing, it suffices to show that \(E_d A^* E_0 E^*_0=0\) in view of Proposition 6.4. To show that \(E_d A^* E_0 E^*_0=0\), we invoke Proposition 8.4. By (60) we have \(E_d A^* E_i=0\) for \(1 \le i \le d-2\). We mentioned earlier that \(\vartheta _1 =\vartheta _d\). These comments and Proposition 8.4 imply \(E_d A^*E_0 E^*_0(\theta _0-\theta _{d-1})=0\). The scalar \(\theta _0-\theta _{d-1}\) is nonzero since \(d\ge 2\), so \(E_d A^*E_0 E^*_0=0\). We have shown (61).

We show (63). We must show that \(E_i A^*E_{i-1}\not =0\) for \(1 \le i \le d\). To this end we first apply Proposition 7.6 to \(\Phi ^\Downarrow \). Proposition 7.6(i) holds for \(\Phi ^\Downarrow \) by (56), (58), (60), (61). So Proposition 7.6(iii) holds for \(\Phi ^\Downarrow \). Consequently there exist scalars \(\lbrace \varphi ^\Downarrow _i \rbrace _{i=1}^d\) in \({\mathbb {F}}\) and a basis for V with respect to which

$$\begin{aligned} A: \quad \left( \begin{array}{cccccc} \theta _d &{} &{} &{} &{} &{} \mathbf{0} \\ 1 &{} \theta _{d-1} &{} &{} &{} &{} \\ &{} 1 &{} \theta _{d-2} &{} &{} &{} \\ &{}&{} \cdot &{} \cdot &{}&{} \\ &{} &{} &{} \cdot &{} \cdot &{} \\ \mathbf{0} &{} &{} &{} &{} 1 &{} \theta _0 \end{array} \right) , \qquad A^*:\quad \left( \begin{array}{cccccc} \theta ^*_0 &{} \varphi ^\Downarrow _1 &{} &{} &{} &{} \mathbf{0} \\ &{} \theta ^*_1 &{} \varphi ^\Downarrow _2 &{} &{} &{} \\ &{} &{} \theta ^*_2 &{} \cdot &{} &{} \\ &{}&{} &{} \cdot &{} \cdot &{} \\ &{} &{} &{} &{} \cdot &{} \varphi ^\Downarrow _d \\ \mathbf{0} &{} &{} &{} &{} &{} \theta ^*_d \end{array} \right) . \end{aligned}$$
(66)

We are trying to show that \(E_i A^*E_{i-1}\not =0\) for \(1 \le i \le d\). By Lemma 7.7 (applied to \(\Phi ^\Downarrow \)), it suffices to show that \(\varphi ^\Downarrow _i \not =0\) for \(1 \le i \le d\). To this end we show that \(\varphi ^\Downarrow _i = \phi _i\) for \(1 \le i \le d\). By (PA4),

$$\begin{aligned} \phi _i - (\theta ^*_i-\theta ^*_0)(\theta _{d-i+1}-\theta _0) = \varphi _1 \sum _{h=0}^{i-1} \frac{\theta _h - \theta _{d-h}}{\theta _0 - \theta _d} \qquad (1 \le i \le d). \end{aligned}$$

Define

$$\begin{aligned} \vartheta ^{\Downarrow }_i = \varphi ^\Downarrow _i - (\theta ^*_i - \theta ^*_0)(\theta _{d-i+1}-\theta _0) \qquad \qquad (1 \le i \le d). \end{aligned}$$

We show

$$\begin{aligned} \vartheta ^\Downarrow _i = \varphi _1 \sum _{h=0}^{i-1} \frac{\theta _h - \theta _{d-h}}{\theta _0 - \theta _d} \qquad \qquad (1 \le i \le d). \end{aligned}$$
(67)

To show (67), by Proposition 13.4 it suffices to show (i) \(\vartheta ^\Downarrow _1=\varphi _1\); (ii) \(\vartheta ^\Downarrow _1= \vartheta ^\Downarrow _d\); (iii) \(\lbrace \vartheta ^\Downarrow _i\rbrace _{i=0}^{d+1}\) is \(\beta \)-recurrent, where \(\vartheta ^\Downarrow _0=0\) and \(\vartheta ^\Downarrow _{d+1}=0\). To show (i), for \(\Phi \) and \(\Phi ^\Downarrow \) we compute \(a_0\) using Lemma 8.1; this yields \(a_0 = \theta _0 + \varphi _1(\theta ^*_0 - \theta ^*_1)^{-1}\) and \(a_0 = \theta _d + \varphi _1^\Downarrow (\theta ^*_0 - \theta ^*_1)^{-1}\). Combining these equations we obtain \(\varphi _1-\varphi ^\Downarrow _1 = (\theta ^*_1 - \theta ^*_0)(\theta _0 - \theta _d)\), so

$$\begin{aligned} \vartheta ^\Downarrow _1 = \varphi ^\Downarrow _1 - (\theta ^*_1 - \theta ^*_0)(\theta _d - \theta _0) = \varphi _1. \end{aligned}$$

To show (ii), we apply Proposition 8.4 to \(\Phi ^\Downarrow \). This gives

$$\begin{aligned} \sum _{i=0}^{d-2} E_0 A^* E_{d-i} E^*_0 (\theta _{d-i}-\theta _1)= E_0 E^*_0 (\vartheta ^\Downarrow _1 - \vartheta ^\Downarrow _d). \end{aligned}$$

For this equation the left-hand side is zero, since each summand is zero by (62). We mentioned earlier that \(E^*_0\) is normalizing, so \(E_0 E^*_0\not =0\) by Definition 6.1. By these comments \(\vartheta ^\Downarrow _1 =\vartheta ^\Downarrow _d\). To show (iii), we apply Proposition 16.4 to the matrices (66). Recall that \(\lbrace \theta _i \rbrace _{i=0}^d\) is \((\beta , \gamma , \varrho )\)-recurrent and \(\lbrace \theta ^*_i \rbrace _{i=0}^d\) is \(\beta \)-recurrent. We mentioned earlier that \(\beta , \gamma , \varrho \) satisfy (TD1). By these comments and Proposition 16.4, the sequence \(\lbrace \vartheta ^\Downarrow _i \rbrace _{i=0}^{d+1}\) is \(\beta \)-recurrent. We have shown (i)–(iii), so (67) holds by Proposition 13.4. Comparing (67) with the display obtained from (PA4), and recalling the definition of \(\vartheta ^\Downarrow _i\), we obtain \(\varphi ^\Downarrow _i = \phi _i\) for \(1 \le i \le d\). Consequently \(\varphi ^\Downarrow _i \not = 0\) for \(1 \le i \le d\) by (PA1), so \(E_i A^*E_{i-1}\not =0\) for \(1 \le i \le d\). We have shown (63).

We show (57) and (59) by invoking Proposition 4.4. Consider the conditions (i)–(iv) in that proposition. Condition (i) holds by (56), (58). Condition (iii) holds by (60), (61), (63). Condition (iv) holds by (62), (64). Condition (ii) holds by Proposition 4.4, and this implies (57) and (59).

We have shown (56)–(64), so \(\Phi \) is a Leonard system on V. By construction \(\Phi \) has eigenvalue sequence \(\lbrace \theta _i \rbrace _{i=0}^d\), dual eigenvalue sequence \(\lbrace \theta ^*_i \rbrace _{i=0}^d\), and first split sequence \(\lbrace \varphi _i \rbrace _{i=1}^d\). By construction \(\lbrace \varphi ^\Downarrow _i \rbrace _{i=1}^d\) is the first split sequence of \(\Phi ^\Downarrow \) and hence the second split sequence of \(\Phi \). We showed \(\varphi ^\Downarrow _i = \phi _i \) for \(1 \le i \le d\), so \(\lbrace \phi _i \rbrace _{i=1}^d\) is the second split sequence for \(\Phi \). By these comments and Definition 9.6, the sequence (21) is the parameter array of \(\Phi \). By Proposition 9.8 the Leonard system \(\Phi \) is unique up to isomorphism. \(\square \)
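As a concrete instance of the construction just given (our own example, not taken from the text), the bidiagonal matrices built from the Krawtchouk-type array \(\theta _i=\theta ^*_i=d-2i\), \(\varphi _i=-2i(d-i+1)\) satisfy the tridiagonal relation (54) with \(\beta =2\), \(\gamma =0\), \(\varrho =4\):

```python
# Illustration with our own data: theta_i = theta*_i = d - 2i and
# varphi_i = -2i(d-i+1). The matrices A, A* built as in the proof satisfy
# the relation of Proposition 16.4 with beta = 2, gamma = 0, rho = 4.
import numpy as np

d = 6
theta = np.array([float(d - 2 * i) for i in range(d + 1)])
thetas = theta.copy()
varphi = np.array([float(-2 * i * (d - i + 1)) for i in range(1, d + 1)])

A = np.diag(theta) + np.diag(np.ones(d), -1)   # lower bidiagonal
As = np.diag(thetas) + np.diag(varphi, 1)      # upper bidiagonal

beta, gamma, rho = 2.0, 0.0, 4.0
M = (A @ A @ As - beta * A @ As @ A + As @ A @ A
     - gamma * (A @ As + As @ A) - rho * As)
assert np.max(np.abs(A @ M - M @ A)) < 1e-6    # the commutator (54) vanishes
print("the Krawtchouk matrices satisfy the tridiagonal relation")
```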

18 Characterizations of Leonard Systems and Parameter Arrays

We are done discussing Theorem 10.1. In this section we discuss some related results concerning Leonard systems and parameter arrays.

We comment on notation. Let \(\lbrace u_i\rbrace _{i=0}^d\) and \(\lbrace v_i\rbrace _{i=0}^d\) denote bases for V. By the transition matrix from \(\lbrace u_i\rbrace _{i=0}^d\) to \(\lbrace v_i\rbrace _{i=0}^d\) we mean the matrix \(M \in \mathrm{Mat}_{d+1}({\mathbb {F}})\) such that \(v_j = \sum _{i=0}^d M_{ij} u_i\) for \(0 \le j \le d\).

The following result is a variation on [22, Theorem 5.1].

Theorem 18.1

Let \(\Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) denote a pre Leonard system on V, with eigenvalue sequence \(\lbrace \theta _i \rbrace _{i=0}^d\) and dual eigenvalue sequence \(\lbrace \theta ^*_i \rbrace _{i=0}^d\). Then \(\Phi \) is a Leonard system on V if and only if the following (i), (ii) hold:

  1. (i)

    there exist nonzero scalars \(\lbrace \varphi _i \rbrace _{i=1}^d\) in \({\mathbb {F}}\) and a basis for V with respect to which

    $$\begin{aligned} A: \quad \left( \begin{array}{cccccc} \theta _0 &{} &{} &{} &{} &{} \mathbf{0} \\ 1 &{} \theta _1 &{} &{} &{} &{} \\ &{} 1 &{} \theta _2 &{} &{} &{} \\ &{}&{} \cdot &{} \cdot &{}&{} \\ &{} &{} &{} \cdot &{} \cdot &{} \\ \mathbf{0} &{} &{} &{} &{} 1 &{} \theta _d \end{array} \right) , \qquad \qquad A^*: \quad \left( \begin{array}{cccccc} \theta ^*_0 &{} \varphi _1 &{} &{} &{} &{} \mathbf{0} \\ &{} \theta ^*_1 &{} \varphi _2 &{} &{} &{} \\ &{} &{} \theta ^*_2 &{} \cdot &{} &{} \\ &{}&{} &{} \cdot &{} \cdot &{} \\ &{} &{} &{} &{} \cdot &{} \varphi _d \\ \mathbf{0} &{} &{} &{} &{} &{} \theta ^*_d \end{array} \right) ; \end{aligned}$$
  2. (ii)

    there exist nonzero scalars \(\lbrace \phi _i \rbrace _{i=1}^d\) in \({\mathbb {F}}\) and a basis for V with respect to which

    $$\begin{aligned} A: \quad \left( \begin{array}{cccccc} \theta _d &{} &{} &{} &{} &{} \mathbf{0} \\ 1 &{} \theta _{d-1} &{} &{} &{} &{} \\ &{} 1 &{} \theta _{d-2} &{} &{} &{} \\ &{}&{} \cdot &{} \cdot &{}&{} \\ &{} &{} &{} \cdot &{} \cdot &{} \\ \mathbf{0} &{} &{} &{} &{} 1 &{} \theta _0 \end{array} \right) , \qquad \qquad A^*: \quad \left( \begin{array}{cccccc} \theta ^*_0 &{} \phi _1 &{} &{} &{} &{} \mathbf{0} \\ &{} \theta ^*_1 &{} \phi _2 &{} &{} &{} \\ &{} &{} \theta ^*_2 &{} \cdot &{} &{} \\ &{}&{} &{} \cdot &{} \cdot &{} \\ &{} &{} &{} &{} \cdot &{} \phi _d \\ \mathbf{0} &{} &{} &{} &{} &{} \theta ^*_d \end{array} \right) . \end{aligned}$$

In this case \(\bigl ( \lbrace \theta _i \rbrace _{i=0}^d; \lbrace \theta ^*_i \rbrace _{i=0}^d; \lbrace \varphi _i \rbrace _{i=1}^d; \lbrace \phi _i \rbrace _{i=1}^d \bigr )\) is the parameter array of \(\Phi \).

Proof

Apply Proposition 7.6 and Lemma 7.7 to both \(\Phi \) and \(\Phi ^\Downarrow \). \(\square \)

Using Theorem 18.1 we can easily recover the following result.

Theorem 18.2

(See [21, Theorem 3.2]). Consider a sequence

$$\begin{aligned} \bigl ( \lbrace \theta _i \rbrace _{i=0}^d; \lbrace \theta ^*_i \rbrace _{i=0}^d; \lbrace \varphi _i \rbrace _{i=1}^d; \lbrace \phi _i \rbrace _{i=1}^d \bigr ) \end{aligned}$$
(68)

of scalars in \({\mathbb {F}}\) that satisfies (PA1), (PA2). Then the following are equivalent:

  1. (i)

    the sequence (68) satisfies (PA3)–(PA5);

  2. (ii)

    there exists an invertible \(G \in \mathrm{Mat}_{d+1}({\mathbb {F}})\) such that both

    $$\begin{aligned}&G^{-1} \left( \begin{array}{cccccc} \theta _0 &{} &{} &{} &{} &{} \mathbf{0} \\ 1 &{} \theta _1 &{} &{} &{} &{} \\ &{} 1 &{} \theta _2 &{} &{} &{} \\ &{}&{} \cdot &{} \cdot &{}&{} \\ &{} &{} &{} \cdot &{} \cdot &{} \\ \mathbf{0} &{} &{} &{} &{} 1 &{} \theta _d \end{array} \right) G = \left( \begin{array}{cccccc} \theta _d &{} &{} &{} &{} &{} \mathbf{0} \\ 1 &{} \theta _{d-1} &{} &{} &{} &{} \\ &{} 1 &{} \theta _{d-2} &{} &{} &{} \\ &{}&{} \cdot &{} \cdot &{}&{} \\ &{} &{} &{} \cdot &{} \cdot &{} \\ \mathbf{0} &{} &{} &{} &{} 1 &{} \theta _0 \end{array} \right) ,\\&G^{-1} \left( \begin{array}{cccccc} \theta ^*_0 &{} \varphi _1 &{} &{} &{} &{} \mathbf{0} \\ &{} \theta ^*_1 &{} \varphi _2 &{} &{} &{} \\ &{} &{} \theta ^*_2 &{} \cdot &{} &{} \\ &{}&{} &{} \cdot &{} \cdot &{} \\ &{} &{} &{} &{} \cdot &{} \varphi _d \\ \mathbf{0} &{} &{} &{} &{} &{} \theta ^*_d \end{array} \right) G= \left( \begin{array}{cccccc} \theta ^*_0 &{} \phi _1 &{} &{} &{} &{} \mathbf{0} \\ &{} \theta ^*_1 &{} \phi _2 &{} &{} &{} \\ &{} &{} \theta ^*_2 &{} \cdot &{} &{} \\ &{}&{} &{} \cdot &{} \cdot &{} \\ &{} &{} &{} &{} \cdot &{} \phi _d \\ \mathbf{0} &{} &{} &{} &{} &{} \theta ^*_d \end{array} \right) . \end{aligned}$$

Proof

\(\mathrm{(i)} \Rightarrow \mathrm{(ii)}\) By Theorem 10.1 there exists a Leonard system on V with parameter array (68). The matrix G is the transition matrix from a basis for V that satisfies Theorem 18.1(i), to a basis for V that satisfies Theorem 18.1(ii).

\(\mathrm{(ii)} \Rightarrow \mathrm{(i)}\) Pick a basis \(\lbrace u_i \rbrace _{i=0}^d\) for V. Define \(A, A^* \in \mathrm{End}(V)\) whose matrix representations with respect to \(\lbrace u_i \rbrace _{i=0}^d\) are from Theorem 18.1(i). Observe that A (resp. \(A^*\)) is multiplicity-free with eigenvalues \(\lbrace \theta _i \rbrace _{i=0}^d\) (resp. \(\lbrace \theta ^*_i \rbrace _{i=0}^d\)); for \(0 \le i \le d\) let \(E_i\) (resp. \(E^*_i\)) denote the primitive idempotent of A (resp. \(A^*\)) for \(\theta _i\) (resp. \(\theta ^*_i\)). The sequence \(\Phi :=(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d) \) is a pre Leonard system on V. We show that \(\Phi \) is a Leonard system on V. By linear algebra there exists a basis \(\lbrace v_i \rbrace _{i=0}^d\) of V such that G is the transition matrix from \(\lbrace u_i \rbrace _{i=0}^d\) to \(\lbrace v_i \rbrace _{i=0}^d\). The matrix representations of A and \(A^*\) with respect to \(\lbrace v_i \rbrace _{i=0}^d\) are shown in Theorem 18.1(ii). The pre Leonard system \(\Phi \) satisfies Theorem 18.1(i) and Theorem 18.1(ii), so by that theorem \(\Phi \) is a Leonard system on V. Also by that theorem (68) is the parameter array of \(\Phi \), so (68) satisfies (PA3)–(PA5). \(\square \)
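The equivalence in Theorem 18.2 can be illustrated numerically: for a parameter array satisfying (PA1)–(PA5), a matrix G as in (ii) can be found as a null vector of the combined Sylvester-type system. The data below (Krawtchouk type) are our own choice, not taken from the text.

```python
# Our own data (Krawtchouk type): theta_i = theta*_i = d - 2i,
# varphi_i = -2i(d-i+1), phi_i = 2i(d-i+1). We compute G as a null vector
# of the simultaneous similarity equations from Theorem 18.2(ii).
import numpy as np

d = 5
n = d + 1
theta = np.array([float(d - 2 * i) for i in range(n)])
varphi = np.array([float(-2 * i * (d - i + 1)) for i in range(1, n)])
phi = np.array([float(2 * i * (d - i + 1)) for i in range(1, n)])

A1 = np.diag(theta) + np.diag(np.ones(d), -1)        # first display
As1 = np.diag(theta) + np.diag(varphi, 1)
A2 = np.diag(theta[::-1]) + np.diag(np.ones(d), -1)  # second display
As2 = np.diag(theta) + np.diag(phi, 1)

# A1 G = G A2 and As1 G = G As2, in column-major vectorized form
I = np.eye(n)
K = np.vstack([np.kron(I, A1) - np.kron(A2.T, I),
               np.kron(I, As1) - np.kron(As2.T, I)])
G = np.linalg.svd(K)[2][-1].reshape(n, n, order='F')

assert np.max(np.abs(A1 @ G - G @ A2)) < 1e-6
assert np.max(np.abs(As1 @ G - G @ As2)) < 1e-6
assert np.linalg.matrix_rank(G) == n                 # G is invertible
print("found an invertible G as in Theorem 18.2(ii)")
```

By Schur's lemma the solution space of this system is one-dimensional, so the singular vector for the smallest singular value recovers G up to a scalar.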

19 The Intersection Numbers

We now bring in the intersection numbers. Throughout this section \(\Phi =(A; \lbrace E_i\rbrace _{i=0}^d; A^*; \lbrace E^*_i\rbrace _{i=0}^d)\) denotes a Leonard system on V, with parameter array \(\bigl ( \lbrace \theta _i \rbrace _{i=0}^d; \lbrace \theta ^*_i \rbrace _{i=0}^d; \lbrace \varphi _i \rbrace _{i=1}^d; \lbrace \phi _i \rbrace _{i=1}^d \bigr )\). Applying Lemma 7.5 to \(\Phi ^*\) we see that \(E_0\) is normalizing. For \(0 \not =\xi \in E_0V\) the vectors \(\lbrace E^*_i \xi \rbrace _{i=0}^d\) form a basis for V, said to be \(\Phi \)-standard. With respect to this basis

$$\begin{aligned} A: \quad \left( \begin{array}{cccccc} a_0 &{} b_0 &{} &{} &{} &{} \mathbf{0} \\ c_1 &{} a_1 &{} b_1 &{} &{} &{} \\ &{} c_2 &{} \cdot &{} \cdot &{} &{} \\ &{}&{} \cdot &{} \cdot &{}\cdot &{} \\ &{} &{} &{} \cdot &{} \cdot &{} b_{d-1} \\ \mathbf{0} &{} &{} &{} &{} c_d &{} a_d \end{array} \right) , \qquad \qquad A^*: \quad \mathrm{diag}(\theta ^*_0,\theta ^*_1,\ldots , \theta ^*_d), \end{aligned}$$

where \(\lbrace a_i\rbrace _{i=0}^d\) are from Definition 3.3 and \(\lbrace b_i\rbrace _{i=0}^{d-1}\), \(\lbrace c_i\rbrace _{i=1}^{d}\) are nonzero scalars in \({\mathbb {F}}\). The vector \(\xi \) is an eigenvector for A with eigenvalue \(\theta _0\). Moreover \(\xi =\sum _{i=0}^d E^*_i\xi \). Consequently

$$\begin{aligned} c_i + a_i + b_i = \theta _0 \qquad \qquad (0 \le i \le d), \end{aligned}$$
(69)

where \(c_0=0\) and \(b_d=0\). We call \(\lbrace b_i\rbrace _{i=0}^{d-1}\), \(\lbrace c_i\rbrace _{i=1}^{d}\) the intersection numbers of \(\Phi \). They are discussed in [20, Sect. 11]. The intersection numbers \(\lbrace b^*_i\rbrace _{i=0}^{d-1}\), \(\lbrace c^*_i\rbrace _{i=1}^{d}\) of \(\Phi ^*\) are called the dual intersection numbers of \(\Phi \). By construction

$$\begin{aligned} c^*_i + a^*_i + b^*_i = \theta ^*_0 \qquad \qquad (0 \le i \le d), \end{aligned}$$
(70)

where \(c^*_0=0\) and \(b^*_d=0\). With respect to a \(\Phi ^*\)-standard basis for V,

$$\begin{aligned} A: \quad \mathrm{diag}(\theta _0,\theta _1,\ldots , \theta _d), \qquad \qquad A^*: \quad \left( \begin{array}{cccccc} a^*_0 &{} b^*_0 &{} &{} &{} &{} \mathbf{0} \\ c^*_1 &{} a^*_1 &{} b^*_1 &{} &{} &{} \\ &{} c^*_2 &{} \cdot &{} \cdot &{} &{} \\ &{}&{} \cdot &{} \cdot &{}\cdot &{} \\ &{} &{} &{} \cdot &{} \cdot &{} b^*_{d-1} \\ \mathbf{0} &{} &{} &{} &{} c^*_d &{} a^*_d \end{array} \right) . \end{aligned}$$
(71)

We mention a handy recurrence.

Lemma 19.1

Assume \(d\ge 1\). Then for \(0 \le i \le d\) we have

$$\begin{aligned}&c_i \theta ^*_{i-1} + a_i \theta ^*_{i} + b_i \theta ^*_{i+1} = \theta _1 \theta ^*_i + a^*_0(\theta _0-\theta _1), \end{aligned}$$
(72)
$$\begin{aligned}&c^*_i \theta _{i-1} + a^*_i \theta _{i} + b^*_i \theta _{i+1} = \theta ^*_1 \theta _i + a_0(\theta ^*_0-\theta ^*_1), \end{aligned}$$
(73)

where \(\theta _{-1}\), \(\theta _{d+1}\), \(\theta ^*_{-1}\), \(\theta ^*_{d+1}\) denote indeterminates.

Proof

The proof of (72) is similar to the proof of (69). To obtain (72), use the fact that for \(0 \not =\xi \in E_0V\) the vector \((A^*-a^*_0 I) \xi \) is an eigenvector for A with eigenvalue \(\theta _1\), and \((A^*-a^*_0 I) \xi = \sum _{i=0}^d (\theta ^*_i-a^*_0)E^*_i \xi \). To get (73), apply (72) to \(\Phi ^*\). \(\square \)

Lemma 19.2

For \(d\ge 1\) we have

$$\begin{aligned} b_0&= \theta _0-a_0, \\ b_i&= \frac{(a_i - \theta _0)(\theta ^*_i - \theta ^*_{i-1})+ (\theta _0-\theta _1)(\theta ^*_i-a^*_0)}{\theta ^*_{i-1} - \theta ^*_{i+1}} \qquad (1 \le i \le d-1), \\ c_i&= \frac{(a_i - \theta _0)(\theta ^*_i - \theta ^*_{i+1})+ (\theta _0-\theta _1)(\theta ^*_i-a^*_0)}{\theta ^*_{i+1} - \theta ^*_{i-1}} \qquad (1 \le i \le d-1), \\ c_d&= \theta _0 - a_d. \end{aligned}$$

To get \(\lbrace b^*_i\rbrace _{i=0}^{d-1}\) and \(\lbrace c^*_i\rbrace _{i=1}^d\), exchange starred and nonstarred symbols everywhere in the above equations.

Proof

To obtain \(b_0\) set \(i=0\) in (69). To obtain \(b_i\), \(c_i\) for \(1\le i \le d-1\), solve the linear equations (69), (72). To obtain \(c_d\) set \(i=d\) in (69). To obtain \(\lbrace b^*_i\rbrace _{i=0}^{d-1}\) and \(\lbrace c^*_i\rbrace _{i=1}^d\), apply the above comments to \(\Phi ^*\). \(\square \)
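As an illustration of Lemma 19.2 (our own example, not taken from the text), the \(d\)-cube has \(\theta _i=\theta ^*_i=d-2i\), \(a_i=0\), \(a^*_0=0\), and its intersection numbers are classically \(b_i=d-i\), \(c_i=i\); the formulas above reproduce these values.

```python
# Hypercube data of our own choosing: theta_i = theta*_i = d - 2i,
# a_i = 0, a*_0 = 0. Lemma 19.2 should return b_i = d - i and c_i = i.
d = 7
theta = [d - 2 * i for i in range(d + 1)]
thetas = list(theta)
a = [0] * (d + 1)
as0 = 0

b = [theta[0] - a[0]]                # b_0 = theta_0 - a_0
for i in range(1, d):
    b.append(((a[i] - theta[0]) * (thetas[i] - thetas[i - 1])
              + (theta[0] - theta[1]) * (thetas[i] - as0))
             // (thetas[i - 1] - thetas[i + 1]))
c = [None]                           # c_0 = 0 by convention
for i in range(1, d):
    c.append(((a[i] - theta[0]) * (thetas[i] - thetas[i + 1])
              + (theta[0] - theta[1]) * (thetas[i] - as0))
             // (thetas[i + 1] - thetas[i - 1]))
c.append(theta[0] - a[d])            # c_d = theta_0 - a_d

assert b == [d - i for i in range(d)]
assert c[1:] == list(range(1, d + 1))
# consistency with (69): c_i + a_i + b_i = theta_0 for 0 <= i <= d
cfull, bfull = [0] + c[1:], b + [0]
assert all(cfull[i] + a[i] + bfull[i] == theta[0] for i in range(d + 1))
print("Lemma 19.2 matches the hypercube intersection numbers")
```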

In Lemma 19.1 we gave a recurrence. We now give a more general recurrence.

Lemma 19.3

For \(0 \le i\le d\) and \(1\le j \le d\),

$$\begin{aligned} c_i \tau ^*_j(\theta ^*_{i-1})+ a_i \tau ^*_j(\theta ^*_{i})+ b_i \tau ^*_j(\theta ^*_{i+1})&= \theta _j \tau ^*_j(\theta ^*_i) + \varphi _j \tau ^*_{j-1}(\theta ^*_i), \end{aligned}$$
(74)
$$\begin{aligned} c_i \eta ^*_j(\theta ^*_{i-1})+ a_i \eta ^*_j(\theta ^*_{i})+ b_i \eta ^*_j(\theta ^*_{i+1})&= \theta _j \eta ^*_j(\theta ^*_i) + \phi _{d-j+1} \eta ^*_{j-1}(\theta ^*_i), \end{aligned}$$
(75)
$$\begin{aligned} c^*_i \tau _j(\theta _{i-1})+ a^*_i \tau _j(\theta _{i})+ b^*_i \tau _j(\theta _{i+1})&= \theta ^*_j \tau _j(\theta _i) + \varphi _j \tau _{j-1}(\theta _i), \end{aligned}$$
(76)
$$\begin{aligned} c^*_i \eta _j(\theta _{i-1})+ a^*_i \eta _j(\theta _{i})+ b^*_i \eta _j(\theta _{i+1})&= \theta ^*_j \eta _j(\theta _i) + \phi _j \eta _{j-1}(\theta _i), \end{aligned}$$
(77)

where \(\theta _{-1}\), \(\theta _{d+1}\), \(\theta ^*_{-1}\), \(\theta ^*_{d+1}\) denote indeterminates.

Proof

Concerning (76), let \(T \in \mathrm{Mat}_{d+1}({\mathbb {F}})\) have \((i,j)\)-entry \(\tau _j(\theta _i)\) for \(0 \le i,j\le d\). For \(0 \not =\xi \in E^*_0V\), T is the transition matrix from the \(\Phi ^*\)-standard basis \(\lbrace E_i \xi \rbrace _{i=0}^d\) to the basis \(\lbrace \tau _i(A)\xi \rbrace _{i=0}^d\). The matrix \(B^*\) on the right in (71) represents \(A^*\) with respect to \(\lbrace E_i \xi \rbrace _{i=0}^d\). The matrix \(D^*\) on the right in (12) represents \(A^*\) with respect to \(\lbrace \tau _i(A)\xi \rbrace _{i=0}^d\). By linear algebra \(B^*T = T D^*\). The entries of this matrix equation give the equations (76). To obtain (77), in the above argument replace the basis \(\lbrace \tau _i(A)\xi \rbrace _{i=0}^d\) by the basis \(\lbrace \eta _i(A)\xi \rbrace _{i=0}^d\), and replace the matrix on the right in (12) by the matrix on the right in (20). To obtain (74) and (75), apply (76) and (77) to \(\Phi ^*\). \(\square \)

Lemma 19.4

(See [20, Theorem 17.7]). We have

  1. (i)

    \(b_i = \varphi _{i+1} \frac{\tau ^*_i(\theta ^*_i)}{\tau ^*_{i+1}(\theta ^*_{i+1})} \qquad (0 \le i \le d-1)\);

  2. (ii)

    \(c_i = \phi _{i} \frac{\eta ^*_{d-i}(\theta ^*_i)}{\eta ^*_{d-i+1}(\theta ^*_{i-1})} \qquad (1 \le i \le d)\);

  3. (iii)

    \(b^*_i = \varphi _{i+1} \frac{\tau _i(\theta _i)}{\tau _{i+1}(\theta _{i+1})} \qquad (0 \le i \le d-1)\);

  4. (iv)

    \(c^*_i = \phi _{d-i+1} \frac{\eta _{d-i}(\theta _i)}{\eta _{d-i+1}(\theta _{i-1})} \qquad (1 \le i \le d)\).

Proof

  1. (i)

    Set \(j=i+1\) in (74).

  2. (ii)

    Set \(j=d-i+1\) in (75).

  3. (iii)

    Set \(j=i+1\) in (76).

  4. (iv)

    Set \(j=d-i+1\) in (77).

\(\square \)

In Lemma 19.4 we see some fractions. In order to simplify these fractions, we consider the products

$$\begin{aligned} \psi _i = \prod _{h=0}^{i-2} \frac{\theta _i - \theta _{h+1}}{\theta _{i+1}-\theta _h} \qquad \qquad (1 \le i \le d-1). \end{aligned}$$
(78)

Note that \(\psi _1 = 1\). Using Definition 5.1 we find that for \(1 \le i \le d-1\),

$$\begin{aligned} \frac{\tau _i(\theta _i)}{\tau _{i+1}(\theta _{i+1})} = \frac{\psi _i (\theta _i - \theta _0)}{(\theta _{i+1}-\theta _i)(\theta _{i+1}-\theta _{i-1})}. \end{aligned}$$
(79)

We now describe the scalars \(\lbrace \psi _i \rbrace _{i=1}^{d-1}\) in detail. To avoid trivialities we assume that \(d\ge 3\). Let \(\beta +1\) denote the common value of (22).

Lemma 19.5

For \(d\ge 3\) and \(1 \le i \le d-1\) we have the following.

  1. (i)

    Suppose \(\beta \not =2\), \(\beta \not =-2\). Then

    $$\begin{aligned} \psi _i = q^{i-1}\frac{q-1}{q^i-1}\,\frac{q^{2}-1}{q^{i+1}-1}, \end{aligned}$$

    where \(q+q^{-1}=\beta \).

  2. (ii)

    Suppose \(\beta = 2\) and \(\mathrm{char}({\mathbb {F}}) \not =2\). Then

    $$\begin{aligned} \psi _i = \frac{2}{i(i+1)}. \end{aligned}$$
  3. (iii)

    Suppose \(\beta = -2\) and \(\mathrm{char}({\mathbb {F}}) \not =2\). Then

    $$\begin{aligned} \psi _i = \left\{ \begin{array}{ll} -2/i, &{} \text {if }i\text { is even; } \\ 2/(i+1), &{} \text {if }i\text { is odd.} \end{array} \right. \end{aligned}$$
  4. (iv)

    Suppose \(\beta = 0\), \(\mathrm{char}({\mathbb {F}}) =2\), and \(d=3\). Then

    $$\begin{aligned} \psi _i = 1. \end{aligned}$$

Proof

Evaluate the right-hand side of (78) using Lemma 12.3, and simplify the result. \(\square \)
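The closed forms in Lemma 19.5(i),(ii) can be compared directly with the defining product (78); the sketch below uses eigenvalue sequences of our own choosing: \(\theta _i=q^i\) for case (i) and \(\theta _i=i\) for case (ii).

```python
# Compare (78) with the closed forms of Lemma 19.5(i),(ii).
# Our own data: theta_i = q^i gives beta = q + 1/q; theta_i = i gives beta = 2.
from math import prod, isclose

def psi(theta, i):
    # the product (78), taken over h = 0, ..., i-2
    return prod((theta[i] - theta[h + 1]) / (theta[i + 1] - theta[h])
                for h in range(i - 1))

d, q = 8, 3.0
geo = [q**i for i in range(d + 1)]      # case (i)
ari = [float(i) for i in range(d + 1)]  # case (ii)
for i in range(1, d):
    closed = q**(i - 1) * (q - 1) / (q**i - 1) * (q**2 - 1) / (q**(i + 1) - 1)
    assert isclose(psi(geo, i), closed)
    assert isclose(psi(ari, i), 2 / (i * (i + 1)))
print("Lemma 19.5(i),(ii) agree with (78)")
```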

Note 19.6

Under the assumptions of Lemma 19.5(iii),

$$\begin{aligned} \psi _i =\frac{4 (-1)^i}{(-1)^i-1-2i} \qquad \qquad (1 \le i \le d-1). \end{aligned}$$

Corollary 19.7

For \(1\le i \le d-1\),

$$\begin{aligned} \frac{\tau ^*_i(\theta ^*_i)}{\tau ^*_{i+1}(\theta ^*_{i+1})}&= \frac{\psi _i (\theta ^*_i - \theta ^*_0)}{(\theta ^*_{i+1}-\theta ^*_i)(\theta ^*_{i+1}-\theta ^*_{i-1})}, \\ \frac{\eta _{d-i}(\theta _i)}{\eta _{d-i+1}(\theta _{i-1})}&= \frac{\psi _{d-i} (\theta _i - \theta _d)}{(\theta _{i-1}-\theta _i)(\theta _{i-1}-\theta _{i+1})}, \\ \frac{\eta ^*_{d-i}(\theta ^*_i)}{\eta ^*_{d-i+1}(\theta ^*_{i-1})}&= \frac{\psi _{d-i} (\theta ^*_i - \theta ^*_d)}{(\theta ^*_{i-1}-\theta ^*_i)(\theta ^*_{i-1}-\theta ^*_{i+1})}. \end{aligned}$$

Proof

The result is vacuous for \(d\le 1\) and trivial for \(d=2\). Next assume that \(d\ge 3\). From the data in Lemma 19.5, we see that for \(1 \le i \le d-1\) the scalar \(\psi _i\) depends only on i and \(\beta \). Consequently \(\psi _i\) is unchanged if we replace \(\Phi \) by \(\Phi ^*\) or \(\Phi ^\Downarrow \). The result follows from these comments and (79). \(\square \)

Proposition 19.8

For \(d\ge 1\) we have

$$\begin{aligned} b_0&= \frac{\varphi _1}{\theta ^*_1-\theta ^*_0}, \\ b_i&= \frac{\varphi _{i+1} \psi _i (\theta ^*_i-\theta ^*_0)}{ (\theta ^*_{i+1}-\theta ^*_i) (\theta ^*_{i+1}-\theta ^*_{i-1}) } \qquad (1 \le i \le d-1), \\ c_i&= \frac{\phi _{i} \psi _{d-i} (\theta ^*_i-\theta ^*_d)}{ (\theta ^*_{i-1}-\theta ^*_i) (\theta ^*_{i-1}-\theta ^*_{i+1}) } \qquad (1 \le i \le d-1), \\ c_d&= \frac{\phi _d}{\theta ^*_{d-1}-\theta ^*_d} \end{aligned}$$

and

$$\begin{aligned} b^*_0&= \frac{\varphi _1}{\theta _1-\theta _0}, \\ b^*_i&= \frac{\varphi _{i+1} \psi _i (\theta _i-\theta _0)}{ (\theta _{i+1}-\theta _i) (\theta _{i+1}-\theta _{i-1}) } \qquad (1 \le i \le d-1), \\ c^*_i&= \frac{\phi _{d-i+1} \psi _{d-i} (\theta _i-\theta _d)}{ (\theta _{i-1}-\theta _i) (\theta _{i-1}-\theta _{i+1}) } \qquad (1 \le i \le d-1), \\ c^*_d&= \frac{\phi _1}{\theta _{d-1}-\theta _d}. \end{aligned}$$

Proof

Evaluate the formulas in Lemma 19.4 using (79) and Corollary 19.7. \(\square \)
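As a check of Proposition 19.8 (our own example, not taken from the text), the Krawtchouk-type array \(\theta _i=\theta ^*_i=d-2i\), \(\varphi _i=-2i(d-i+1)\), \(\phi _i=2i(d-i+1)\) has \(\beta =2\), so \(\psi _i=2/(i(i+1))\) by Lemma 19.5(ii), and the formulas must return the hypercube intersection numbers \(b_i=d-i\), \(c_i=i\).

```python
# Our own data (Krawtchouk type): theta_i = theta*_i = d - 2i,
# varphi_i = -2i(d-i+1), phi_i = 2i(d-i+1), psi_i = 2/(i(i+1)) since beta = 2.
from fractions import Fraction as F

d = 6
th = [F(d - 2 * i) for i in range(d + 1)]
ths = list(th)
vphi = [None] + [F(-2 * i * (d - i + 1)) for i in range(1, d + 1)]
phi = [None] + [F(2 * i * (d - i + 1)) for i in range(1, d + 1)]
psi = [None] + [F(2, i * (i + 1)) for i in range(1, d)]

b = [vphi[1] / (ths[1] - ths[0])]
b += [vphi[i + 1] * psi[i] * (ths[i] - ths[0])
      / ((ths[i + 1] - ths[i]) * (ths[i + 1] - ths[i - 1])) for i in range(1, d)]
c = [None]
c += [phi[i] * psi[d - i] * (ths[i] - ths[d])
      / ((ths[i - 1] - ths[i]) * (ths[i - 1] - ths[i + 1])) for i in range(1, d)]
c += [phi[d] / (ths[d - 1] - ths[d])]

assert b == [d - i for i in range(d)]  # hypercube: b_i = d - i
assert c[1:] == list(range(1, d + 1))  # hypercube: c_i = i
print("Proposition 19.8 matches the hypercube intersection numbers")
```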

We mention some attractive formulas involving the intersection numbers; similar formulas apply to the dual intersection numbers.

Lemma 19.9

The following hold:

  1. (i)

    for \(0 \le i \le d-1\),

    $$\begin{aligned} \frac{ \theta _0 + \theta _1 + \cdots + \theta _i - a_0 -a_1-\cdots - a_i}{b_i} = \prod _{h=0}^{i-1}\frac{\theta ^*_{i+1}-\theta ^*_h}{\theta ^*_i - \theta ^*_h}; \end{aligned}$$
    (80)
  2. (ii)

    for \(1 \le i \le d\),

    $$\begin{aligned} \frac{ a_0 +a_1+\cdots + a_{i-1} - \theta _d - \theta _{d-1} - \cdots - \theta _{d-i+1} }{c_i} = \prod _{h=i+1}^{d}\frac{\theta ^*_{i-1}-\theta ^*_h}{\theta ^*_i - \theta ^*_h}. \end{aligned}$$
    (81)

Proof

  1. (i)

    To verify (80), eliminate \(b_i\) using Lemma 19.4(i), and evaluate the result using Lemma 8.2.

  2. (ii)

    To verify (81), eliminate \(c_i\) using Lemma 19.4(ii), and evaluate the result using Lemma 9.5.

\(\square \)
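As a consistency check (not part of the formal development), the identities (80) and (81) can be verified on a concrete example. A minimal Python sketch, assuming the standard data of the hypercube \(H(d,2)\): intersection numbers \(a_i=0\), \(b_i=d-i\), \(c_i=i\), and self-dual eigenvalues \(\theta_i=\theta^*_i=d-2i\).

```python
from fractions import Fraction as F
from math import prod

# Assumed example: the d-cube H(d,2) with a_i = 0, b_i = d - i, c_i = i
# and (dual) eigenvalues theta_i = theta*_i = d - 2i.
d = 6
theta = [F(d - 2 * i) for i in range(d + 1)]  # eigenvalues
ts = theta[:]                                 # dual eigenvalues (self-dual)
a = [F(0)] * (d + 1)
b = [F(d - i) for i in range(d + 1)]
c = [F(i) for i in range(d + 1)]

# (80): for 0 <= i <= d-1
ok80 = all(
    (sum(theta[: i + 1]) - sum(a[: i + 1])) / b[i]
    == prod((ts[i + 1] - ts[h]) / (ts[i] - ts[h]) for h in range(i))
    for i in range(d)
)
# (81): for 1 <= i <= d
ok81 = all(
    (sum(a[:i]) - sum(theta[d - i + 1:])) / c[i]
    == prod((ts[i - 1] - ts[h]) / (ts[i] - ts[h]) for h in range(i + 1, d + 1))
    for i in range(1, d + 1)
)
```

Exact rational arithmetic via `fractions.Fraction` avoids floating-point comparison issues.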

Corollary 19.10

For \(d\ge 1\),

$$\begin{aligned}&b_{d-1} = (a_d-\theta _d)\frac{ (\theta ^*_{d-1} - \theta ^*_0) (\theta ^*_{d-1} - \theta ^*_1) \cdots (\theta ^*_{d-1} - \theta ^*_{d-2})}{(\theta ^*_d - \theta ^*_0) (\theta ^*_d - \theta ^*_1) \cdots (\theta ^*_d - \theta ^*_{d-2})}, \end{aligned}$$
(82)
$$\begin{aligned}&c_1 = (a_0 - \theta _d)\frac{ (\theta ^*_1 - \theta ^*_2) (\theta ^*_1 - \theta ^*_3) \cdots (\theta ^*_1 - \theta ^*_d)}{(\theta ^*_0 - \theta ^*_2) (\theta ^*_0 - \theta ^*_3) \cdots (\theta ^*_0 - \theta ^*_d)}. \end{aligned}$$
(83)

Proof

To get (82) set \(i=d-1\) in (80), and simplify the result using (4). To get (83) set \(i=1\) in (81). \(\square \)
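The formulas (82) and (83) can likewise be checked on the hypercube \(H(d,2)\) (\(a_i=0\), \(\theta_i=\theta^*_i=d-2i\)), for which \(b_{d-1}=1\) and \(c_1=1\); this example is an assumption of the sketch below.

```python
from fractions import Fraction as F
from math import prod

# Assumed example: the d-cube H(d,2); both (82) and (83) should yield 1.
d = 7
theta = [F(d - 2 * i) for i in range(d + 1)]
ts = theta[:]            # self-dual
a = [F(0)] * (d + 1)

# (82): b_{d-1}
b_dm1 = (a[d] - theta[d]) * prod(
    (ts[d - 1] - ts[h]) / (ts[d] - ts[h]) for h in range(d - 1)
)
# (83): c_1
c_1 = (a[0] - theta[d]) * prod(
    (ts[1] - ts[h]) / (ts[0] - ts[h]) for h in range(2, d + 1)
)
```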

Next we obtain \(\lbrace \theta _i \rbrace _{i=0}^d\) in terms of the intersection numbers and dual eigenvalues. We give two versions, which resemble [3, Corollary 8.3.3].

Lemma 19.11

For \(d\ge 1\),

$$\begin{aligned} \theta _0&= a_0 + b_0, \\ \theta _i&= a_i + b_i \prod _{h=0}^{i-1}\frac{\theta ^*_{i+1}-\theta ^*_h}{\theta ^*_i-\theta ^*_h} - b_{i-1} \prod _{h=0}^{i-2}\frac{\theta ^*_{i}-\theta ^*_h}{\theta ^*_{i-1}-\theta ^*_h} \qquad (1 \le i \le d-1), \\ \theta _d&= a_d - b_{d-1} \prod _{h=0}^{d-2}\frac{\theta ^*_{d}-\theta ^*_h}{\theta ^*_{d-1}-\theta ^*_h}. \end{aligned}$$

Proof

To find \(\theta _i\) for \(0 \le i \le d-1\), solve (80) for \(\theta _i\) and use induction on i. The formula for \(\theta _d\) comes from (82). \(\square \)

Lemma 19.12

For \(d\ge 1\),

$$\begin{aligned} \theta _0&= a_d + c_d, \\ \theta _i&= a_{d-i} + c_{d-i} \prod _{h=d-i+1}^{d}\frac{\theta ^*_{d-i-1}-\theta ^*_h}{\theta ^*_{d-i}-\theta ^*_h} - c_{d-i+1} \prod _{h=d-i+2}^{d}\frac{\theta ^*_{d-i}-\theta ^*_h}{\theta ^*_{d-i+1}-\theta ^*_h} \qquad (1 \le i \le d-1), \\ \theta _d&= a_0 - c_1 \prod _{h=2}^d\frac{\theta ^*_0-\theta ^*_h}{\theta ^*_1-\theta ^*_h}. \end{aligned}$$

Proof

To find \(\theta _d, \theta _{d-1},\ldots , \theta _1\) use (81). The formula for \(\theta _0\) comes from (69) at \(i=d\). \(\square \)
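As a sanity check (under the assumed hypercube data \(a_i=0\), \(b_i=d-i\), \(c_i=i\), \(\theta^*_i=d-2i\)), the formulas of Lemmas 19.11 and 19.12 should both recover \(\theta_i=d-2i\). A Python sketch:

```python
from fractions import Fraction as F
from math import prod

# Assumed example: the d-cube H(d,2).
d = 6
ts = [F(d - 2 * i) for i in range(d + 1)]   # dual eigenvalues
a = [F(0)] * (d + 1)
b = [F(d - i) for i in range(d + 1)]
c = [F(i) for i in range(d + 1)]

# Lemma 19.11: theta_i from a_i, b_i and the dual eigenvalues.
th1 = [a[0] + b[0]]
for i in range(1, d):
    th1.append(
        a[i]
        + b[i] * prod((ts[i + 1] - ts[h]) / (ts[i] - ts[h]) for h in range(i))
        - b[i - 1] * prod((ts[i] - ts[h]) / (ts[i - 1] - ts[h]) for h in range(i - 1))
    )
th1.append(a[d] - b[d - 1] * prod((ts[d] - ts[h]) / (ts[d - 1] - ts[h]) for h in range(d - 1)))

# Lemma 19.12: theta_i from a_i, c_i and the dual eigenvalues.
th2 = [a[d] + c[d]]
for i in range(1, d):
    th2.append(
        a[d - i]
        + c[d - i] * prod((ts[d - i - 1] - ts[h]) / (ts[d - i] - ts[h]) for h in range(d - i + 1, d + 1))
        - c[d - i + 1] * prod((ts[d - i] - ts[h]) / (ts[d - i + 1] - ts[h]) for h in range(d - i + 2, d + 1))
    )
th2.append(a[0] - c[1] * prod((ts[0] - ts[h]) / (ts[1] - ts[h]) for h in range(2, d + 1)))
```

Since the cube is self-dual, both computed sequences should coincide with \(d-2i\).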

Next we obtain some results about duality.

Lemma 19.13

For \(0 \le i,j,r,s\le d\) such that \(i+j=r+s\) and \(r\not =s\),

$$\begin{aligned} \frac{\theta _i - \theta _j}{\theta _r-\theta _s} = \frac{\theta ^*_i - \theta ^*_j}{\theta ^*_r-\theta ^*_s}. \end{aligned}$$

Proof

Use the data in Lemma 12.3. \(\square \)
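The identity of Lemma 19.13 can be tested symbolically on eigenvalue sequences of the closed form \(A + Bq^i + Cq^{-i}\) (one of the standard forms arising from Lemma 12.3), assuming a common base \(q\) for both sequences; the constants \(A, B, C\) below are arbitrary illustrative choices.

```python
from fractions import Fraction as F

# Assumed closed form: theta_i = A + B q^i + C q^(-i), with a common base q
# for both sequences; A, B, C are arbitrary and may differ between the two.
q, d = F(3, 2), 6

def seq(A, B, C):
    return [A + B * q**i + C * q**(-i) for i in range(d + 1)]

theta = seq(F(1), F(2), F(5))
ts = seq(F(-4), F(7), F(-3))

# Check the ratio identity over all (i, j, r, s) with i + j = r + s, r != s.
ok = all(
    (theta[i] - theta[j]) / (theta[r] - theta[s])
    == (ts[i] - ts[j]) / (ts[r] - ts[s])
    for i in range(d + 1)
    for j in range(d + 1)
    for r in range(d + 1)
    for s in range(d + 1)
    if i + j == r + s and r != s
)
```

Indeed, for such sequences \(\theta_i - \theta_j = (q^i - q^j)(B - Cq^{-i-j})\), so the constraint \(i+j=r+s\) makes the second factor cancel in the ratio.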

Lemma 19.14

For \(0 \le i \le d-1\),

$$\begin{aligned}&b_i \frac{ (\theta ^*_{i+1}-\theta ^*_0) (\theta ^*_{i+1}-\theta ^*_1) \cdots (\theta ^*_{i+1}-\theta ^*_i)}{(\theta ^*_i-\theta ^*_0) (\theta ^*_i-\theta ^*_1) \cdots (\theta ^*_i-\theta ^*_{i-1})} \\&\quad = b^*_i \frac{ (\theta _{i+1}-\theta _0) (\theta _{i+1}-\theta _1) \cdots (\theta _{i+1}-\theta _i)}{(\theta _i-\theta _0) (\theta _i-\theta _1) \cdots (\theta _i-\theta _{i-1})}. \end{aligned}$$

Proof

Each side is equal to \(\varphi _{i+1}\) by Lemma 19.4(i),(iii). \(\square \)

Lemma 19.15

For \(d\ge 1\),

$$\begin{aligned} b_0(\theta ^*_1-\theta ^*_0)&= b^*_0(\theta _1-\theta _0), \\ \frac{b_i (\theta ^*_{i+1}-\theta ^*_i) (\theta ^*_{i+1}-\theta ^*_{i-1}) }{ \theta ^*_i-\theta ^*_0}&= \frac{b^*_i (\theta _{i+1}-\theta _i) (\theta _{i+1}-\theta _{i-1}) }{ \theta _i-\theta _0} \qquad (1 \le i \le d-1). \end{aligned}$$

Proof

Compare the formulas for \(b_i\), \(b^*_i\) in Proposition 19.8. \(\square \)

Lemma 19.16

For \(0 \le i \le d-1\),

$$\begin{aligned}&c_{i+1} \frac{ (\theta ^*_{i}-\theta ^*_d) (\theta ^*_{i}-\theta ^*_{d-1}) \cdots (\theta ^*_{i}-\theta ^*_{i+1})}{(\theta ^*_{i+1}-\theta ^*_d) (\theta ^*_{i+1}-\theta ^*_{d-1}) \cdots (\theta ^*_{i+1}-\theta ^*_{i+2})} \\&\quad = c^*_{d-i} \frac{ (\theta _{d-i-1}-\theta _d) (\theta _{d-i-1}-\theta _{d-1}) \cdots (\theta _{d-i-1}-\theta _{d-i})}{(\theta _{d-i}-\theta _d) (\theta _{d-i}-\theta _{d-1}) \cdots (\theta _{d-i}-\theta _{d-i+1})}. \end{aligned}$$

Proof

Each side is equal to \(\phi _{i+1}\) by Lemma 19.4(ii),(iv). \(\square \)

Lemma 19.17

For \(0 \le i \le d-1\),

$$\begin{aligned} \sum _{h=0}^i \frac{\theta _h - a_h}{\theta _i - \theta _{i+1}}&= \sum _{h=0}^i \frac{\theta ^*_h - a^*_h}{\theta ^*_i - \theta ^*_{i+1}}, \\ \sum _{h=0}^i \frac{\theta _{d-h} - a_h}{\theta _{d-i} - \theta _{d-i-1}}&= \sum _{h=0}^i \frac{\theta ^*_h - a^*_{d-h}}{\theta ^*_i - \theta ^*_{i+1}}, \\ \sum _{h=0}^i \frac{\theta _h - a_{d-h}}{\theta _i - \theta _{i+1}}&= \sum _{h=0}^i \frac{\theta ^*_{d-h} - a^*_h}{\theta ^*_{d-i} - \theta ^*_{d-i-1}}, \\ \sum _{h=0}^i \frac{\theta _{d-h} - a_{d-h}}{\theta _{d-i} - \theta _{d-i-1}}&= \sum _{h=0}^i \frac{\theta ^*_{d-h} - a^*_{d-h}}{\theta ^*_{d-i} - \theta ^*_{d-i-1}}. \end{aligned}$$

Proof

The first equation holds because each side is equal to \(-\varphi _{i+1}(\theta _{i+1}-\theta _i)^{-1}(\theta ^*_{i+1}-\theta ^*_i)^{-1}\) by Lemma 8.2. The remaining equations are obtained similarly. \(\square \)

We emphasize a special case of Lemma 19.17.

Lemma 19.18

For \(d\ge 1\),

$$\begin{aligned}&\frac{\theta _0-a_0}{\theta _0-\theta _1}= \frac{\theta ^*_0-a^*_0}{\theta ^*_0-\theta ^*_1}, \qquad \qquad \frac{\theta _d-a_0}{\theta _{d}-\theta _{d-1}}= \frac{\theta ^*_0-a^*_d}{\theta ^*_0-\theta ^*_1}, \\&\frac{\theta _0-a_d}{\theta _0-\theta _1}= \frac{\theta ^*_d-a^*_0}{\theta ^*_{d}-\theta ^*_{d-1}}, \qquad \qquad \frac{\theta _d-a_d}{\theta _{d}-\theta _{d-1}}= \frac{\theta ^*_d-a^*_d}{\theta ^*_{d}-\theta ^*_{d-1}}. \end{aligned}$$

Proof

Set \(i=0\) in Lemma 19.17. \(\square \)

Motivated by Lemma 10.3, our next goal is to explicitly write the intersection numbers and dual intersection numbers in terms of \(\varphi _1\) and \(\lbrace \theta _i \rbrace _{i=0}^{d}\), \(\lbrace \theta ^*_i \rbrace _{i=0}^{d}\). To avoid trivialities we assume that \(d\ge 2\). We will use the following result, which appears in [4, Lemma 15.16] in the context of distance-regular graphs.

Lemma 19.19

For \(d\ge 2\), \(\varphi _2\) is equal to each of

$$\begin{aligned}&\varphi _1 \frac{\theta _0 + \theta _1-\theta _{d-1}-\theta _d}{\theta _0-\theta _d} + (\theta ^*_0-\theta ^*_1)(\theta _0+\theta _{1}-\theta _{d-1} - \theta _d) +(\theta ^*_2-\theta ^*_0)(\theta _1-\theta _d), \end{aligned}$$
(84)
$$\begin{aligned}&\varphi _1 \frac{\theta ^*_0 + \theta ^*_1-\theta ^*_{d-1}-\theta ^*_d}{\theta ^*_0-\theta ^*_d} + (\theta _0-\theta _1)(\theta ^*_0+\theta ^*_{1}-\theta ^*_{d-1} - \theta ^*_d) +(\theta _2-\theta _0)(\theta ^*_1-\theta ^*_d). \end{aligned}$$
(85)

Proof

To get (84), evaluate (PA3) at \(i=2\) and simplify the result using (PA4) at \(i=1\). To get (85) from (84), note that each of \(\varphi _1, \varphi _2\) is unchanged if we replace \(\Phi \) by \(\Phi ^*\). \(\square \)

The following result appears in [3, Theorem 8.1.1] and [4, Theorem 15.18], in the context of distance-regular graphs.

Proposition 19.20

For \(d\ge 2\),

$$\begin{aligned} b_0&= \frac{\varphi _1}{\theta ^*_1-\theta ^*_0}, \\ b_i&= \frac{(\theta ^*_0 - \theta ^*_i)(\varphi _1 f^-_i + g^-_i)}{ (\theta ^*_{i+1}-\theta ^*_i)(\theta ^*_{i+1}-\theta ^*_{i-1})} \qquad (1 \le i \le d-1), \\ c_i&= \frac{(\theta ^*_0 - \theta ^*_i)(\varphi _1 f^+_i + g^+_i)}{ (\theta ^*_{i-1}-\theta ^*_i)(\theta ^*_{i-1}-\theta ^*_{i+1})} \qquad (1 \le i \le d-1), \\ c_d&= \frac{\varphi _1+ (\theta _0 - \theta _1) (\theta ^*_{0}-\theta ^*_d)}{\theta ^*_{d-1}-\theta ^*_d} \end{aligned}$$

where

$$\begin{aligned} f^{\pm }_i&= \frac{\theta ^*_1-\theta ^*_{i\pm 1}}{\theta ^*_0-\theta ^*_i} - \frac{\theta ^*_1-\theta ^*_{d-1}}{\theta ^*_0-\theta ^*_d}, \\ g^{\pm }_i&= (\theta _1-\theta _2)(\theta ^*_i-\theta ^*_d)- (\theta _0-\theta _1)(\theta ^*_{i\pm 1}-\theta ^*_{d-1}). \end{aligned}$$

To get \(\lbrace b^*_i\rbrace _{i=0}^{d-1}\) and \(\lbrace c^*_i\rbrace _{i=1}^d\), exchange the symbols \(\theta \), \(\theta ^*\) everywhere in the above equations.

Proof

To get \(b_0\), set \(i=0\) in Lemma 19.4(i). To get \(c_d\), set \(i=d\) in Lemma 19.4(ii) and eliminate \(\phi _d\) using (PA4). We now compute \(b_i, c_i\) for \(1 \le i \le d-1\). Let i be given. For the equations (74) at \(j=1\) and \(j=2\), eliminate \(a_i\) using (69) to obtain a system of two linear equations in the unknowns \(b_i\), \(c_i\). Using linear algebra we routinely solve this system for \(b_i\), \(c_i\), and in the solution eliminate \(\varphi _2\) using (85). We have obtained \(\lbrace b_i\rbrace _{i=0}^{d-1}\) and \(\lbrace c_i\rbrace _{i=1}^d\). To obtain \(\lbrace b^*_i\rbrace _{i=0}^{d-1}\) and \(\lbrace c^*_i\rbrace _{i=1}^d\), apply the above arguments to \(\Phi ^*\). \(\square \)
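The formulas of Proposition 19.20 can be checked on the hypercube \(H(d,2)\), assuming the cube data \(\theta_i=\theta^*_i=d-2i\) and \(\varphi_1=-2d\) (the latter obtained from \(b_0=d\) via Lemma 19.4(i)); the computation should return \(b_i=d-i\) and \(c_i=i\).

```python
from fractions import Fraction as F

# Assumed example: the d-cube H(d,2) with theta_i = theta*_i = d - 2i
# and varphi_1 = -2d.
d = 6
t = [F(d - 2 * i) for i in range(d + 1)]   # theta_i
ts = t[:]                                  # theta*_i (self-dual)
phi1 = F(-2 * d)                           # varphi_1

def f(i, e):  # f^+_i for e = +1, f^-_i for e = -1
    return (ts[1] - ts[i + e]) / (ts[0] - ts[i]) - (ts[1] - ts[d - 1]) / (ts[0] - ts[d])

def g(i, e):  # g^+_i for e = +1, g^-_i for e = -1
    return (t[1] - t[2]) * (ts[i] - ts[d]) - (t[0] - t[1]) * (ts[i + e] - ts[d - 1])

b = [phi1 / (ts[1] - ts[0])]                           # b_0
for i in range(1, d):                                  # b_1, ..., b_{d-1}
    b.append((ts[0] - ts[i]) * (phi1 * f(i, -1) + g(i, -1))
             / ((ts[i + 1] - ts[i]) * (ts[i + 1] - ts[i - 1])))
c = []
for i in range(1, d):                                  # c_1, ..., c_{d-1}
    c.append((ts[0] - ts[i]) * (phi1 * f(i, +1) + g(i, +1))
             / ((ts[i - 1] - ts[i]) * (ts[i - 1] - ts[i + 1])))
c.append((phi1 + (t[0] - t[1]) * (ts[0] - ts[d])) / (ts[d - 1] - ts[d]))  # c_d
```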

Our next goal is to give a variation on Proposition 19.20 that involves \(a_0\), \(a^*_0\) and \(c_1\), \(c^*_1\).

Lemma 19.21

For \(d\ge 2\), \(\varphi _2\) is equal to each of

$$\begin{aligned} (c_1-a_0+ \theta _1)(\theta ^*_2-\theta ^*_0), \qquad \qquad (c^*_1-a^*_0+ \theta ^*_1)(\theta _2-\theta _0). \end{aligned}$$
(86)

Proof

Set \(i=1\) in (69), Lemma 19.4(i), (80). The resulting equations show that \(\varphi _2\) is equal to the expression on the left in (86). Note that \(\varphi _2\) is unchanged if we replace \(\Phi \) by \(\Phi ^*\). Therefore \(\varphi _2\) is equal to the expression on the right in (86). \(\square \)
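The two evaluations of \(\varphi_2\), expression (84) and the left expression in (86), can be cross-checked on the hypercube \(H(d,2)\) (assumed data: \(a_0=0\), \(c_1=1\), \(\theta_i=\theta^*_i=d-2i\), \(\varphi_1=-2d\)); both should give \(\varphi_2=-4(d-1)\).

```python
from fractions import Fraction as F

# Assumed example: the d-cube H(d,2).
d = 8
t = [F(d - 2 * i) for i in range(d + 1)]
ts = t[:]
phi1, a0, c1 = F(-2 * d), F(0), F(1)

# varphi_2 via (84)
e84 = (phi1 * (t[0] + t[1] - t[d - 1] - t[d]) / (t[0] - t[d])
       + (ts[0] - ts[1]) * (t[0] + t[1] - t[d - 1] - t[d])
       + (ts[2] - ts[0]) * (t[1] - t[d]))
# varphi_2 via the left expression in (86)
e86 = (c1 - a0 + t[1]) * (ts[2] - ts[0])
```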

We clarify how \(c_1, a_0\) are related and how \(c^*_1, a^*_0\) are related.

Lemma 19.22

For \(d\ge 2\) we have

$$\begin{aligned} c_1&= (\theta _d-a_0)\frac{ (\theta ^*_0-\theta ^*_1)(\theta _1-\theta _{d-1}) - (\theta ^*_1-\theta ^*_2)(\theta _0-\theta _d) }{(\theta ^*_0-\theta ^*_2)(\theta _0-\theta _d)}, \\ c^*_1&= (\theta ^*_d-a^*_0)\frac{ (\theta _0-\theta _1)(\theta ^*_1-\theta ^*_{d-1}) - (\theta _1-\theta _2)(\theta ^*_0-\theta ^*_d) }{(\theta _0-\theta _2)(\theta ^*_0-\theta ^*_d)}. \end{aligned}$$

Proof

To verify the first equation, express the left-hand side in terms of \(\varphi _2\) using Lemma 19.21, and in the resulting equation express \(a_0\) in terms of \(\varphi _1\) using Lemma 8.1. Evaluate the result using (84). We have verified the first equation. To get the second equation, apply the first equation to \(\Phi ^*\). \(\square \)
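For the hypercube \(H(d,2)\) (assumed data: \(a_0=0\), \(\theta_i=\theta^*_i=d-2i\)), the first formula of Lemma 19.22 should return \(c_1=1\); a brief Python check:

```python
from fractions import Fraction as F

# Assumed example: the d-cube H(d,2), for which c_1 = 1.
d = 9
t = [F(d - 2 * i) for i in range(d + 1)]
ts = t[:]
a0 = F(0)

c1 = (t[d] - a0) * (
    (ts[0] - ts[1]) * (t[1] - t[d - 1]) - (ts[1] - ts[2]) * (t[0] - t[d])
) / ((ts[0] - ts[2]) * (t[0] - t[d]))
```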

Proposition 19.23

For \(d\ge 2\),

$$\begin{aligned} b_0&= \frac{(\theta ^*_0-a^*_0)(\theta _1-\theta _0)}{\theta ^*_1-\theta ^*_0}, \\ b_i&=\frac{ c^*_1(\theta ^*_0-\theta ^*_i)(\theta _0-\theta _2)+ (\theta ^*_i-a^*_0) \bigl ( (\theta _1-\theta _2)(\theta ^*_0-\theta ^*_i)- (\theta _0-\theta _1)(\theta ^*_1-\theta ^*_{i-1}) \bigr ) }{ (\theta ^*_{i+1}-\theta ^*_i)(\theta ^*_{i+1}-\theta ^*_{i-1})}, \\ c_i&=\frac{ c^*_1(\theta ^*_0-\theta ^*_i)(\theta _0-\theta _2)+ (\theta ^*_i-a^*_0) \bigl ( (\theta _1-\theta _2)(\theta ^*_0-\theta ^*_i)- (\theta _0-\theta _1)(\theta ^*_1-\theta ^*_{i+1}) \bigr ) }{ (\theta ^*_{i-1}-\theta ^*_i)(\theta ^*_{i-1}-\theta ^*_{i+1})}, \\ c_d&=\frac{(\theta ^*_d-a^*_0)(\theta _1-\theta _0)}{\theta ^*_{d-1}-\theta ^*_{d}}. \end{aligned}$$

To get \(\lbrace b^*_i\rbrace _{i=0}^{d-1}\) and \(\lbrace c^*_i\rbrace _{i=1}^d\), exchange starred and nonstarred symbols everywhere in the above equations.

Proof

To get \(b_0\), set \(i=0\) in Lemma 19.4(i) and evaluate the result using \(\varphi _1=(\theta ^*_0-a^*_0)(\theta _1-\theta _0)\). To get \(c_d\), set \(i=d\) in Lemma 19.4(ii) and evaluate the result using \(\phi _d=(\theta ^*_d-a^*_0)(\theta _1-\theta _0)\). To obtain \(b_i, c_i\) for \(1 \le i \le d-1\), we proceed as in the proof of Proposition 19.20. Let i be given. For the equations (74) at \(j=1\) and \(j=2\), eliminate \(a_i\) using (69) to obtain a system of two linear equations in the unknowns \(b_i\), \(c_i\). Using linear algebra we routinely solve this system for \(b_i\), \(c_i\), and in the solution eliminate \(\varphi _1\), \(\varphi _2\) using \(\varphi _1=(\theta ^*_0-a^*_0)(\theta _1-\theta _0)\) and \(\varphi _2=(c^*_1-a^*_0+\theta ^*_1)(\theta _2-\theta _0)\). We have obtained \(\lbrace b_i\rbrace _{i=0}^{d-1}\) and \(\lbrace c_i\rbrace _{i=1}^d\). To obtain \(\lbrace b^*_i\rbrace _{i=0}^{d-1}\) and \(\lbrace c^*_i\rbrace _{i=1}^d\), apply the above arguments to \(\Phi ^*\). \(\square \)
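The formulas of Proposition 19.23 can be checked on the hypercube \(H(d,2)\), for which \(a^*_0=0\) and \(c^*_1=1\) (assumed data, cf. Note 19.25); the computation should return \(b_i=d-i\) and \(c_i=i\).

```python
from fractions import Fraction as F

# Assumed example: the d-cube H(d,2) with theta_i = theta*_i = d - 2i,
# a*_0 = 0 and c*_1 = 1.
d = 6
t = [F(d - 2 * i) for i in range(d + 1)]   # theta_i
ts = t[:]                                  # theta*_i (self-dual)
as0, cs1 = F(0), F(1)                      # a*_0, c*_1

b = [(ts[0] - as0) * (t[1] - t[0]) / (ts[1] - ts[0])]          # b_0
for i in range(1, d):                                          # b_1..b_{d-1}
    num = (cs1 * (ts[0] - ts[i]) * (t[0] - t[2])
           + (ts[i] - as0) * ((t[1] - t[2]) * (ts[0] - ts[i])
                              - (t[0] - t[1]) * (ts[1] - ts[i - 1])))
    b.append(num / ((ts[i + 1] - ts[i]) * (ts[i + 1] - ts[i - 1])))
c = []
for i in range(1, d):                                          # c_1..c_{d-1}
    num = (cs1 * (ts[0] - ts[i]) * (t[0] - t[2])
           + (ts[i] - as0) * ((t[1] - t[2]) * (ts[0] - ts[i])
                              - (t[0] - t[1]) * (ts[1] - ts[i + 1])))
    c.append(num / ((ts[i - 1] - ts[i]) * (ts[i - 1] - ts[i + 1])))
c.append((ts[d] - as0) * (t[1] - t[0]) / (ts[d - 1] - ts[d]))  # c_d
```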

Note 19.24

Lemma 19.22 and Proposition 19.23 effectively give the intersection numbers (resp. dual intersection numbers) in terms of \(a^*_0\) (resp. \(a_0\)) and \(\lbrace \theta _i \rbrace _{i=0}^d\), \(\lbrace \theta ^*_i \rbrace _{i=0}^d\). The scalars \(a_0\) and \(a^*_0\) are related by the first equation in Lemma 19.18.

Note 19.25

There is a Leonard system attached to the Bose-Mesner algebra of a Q-polynomial distance-regular graph; see [2, p. 260], [12]. For this Leonard system \(a_0=0\), \(a^*_0=0\) and \(c_1=1\), \(c^*_1=1\).