Notes on the Leonard system classification

Around 2001 we classified the Leonard systems up to isomorphism. The proof was lengthy and involved considerable computation. In this paper we give a proof that is shorter and involves minimal computation. We also give a comprehensive description of the intersection numbers of a Leonard system.


Introduction
In the area of Algebraic Combinatorics there is an object called a commutative association scheme [2], [13]. This is a combinatorial generalization of a finite group, one that retains enough of the group structure so that one can still speak of the character table. During the decade 1970-1980 it was realized by E. Bannai, P. Delsarte, D. Higman and others that commutative association schemes help to unify many aspects of group theory, coding theory, and design theory. An early work in this direction was the 1973 thesis of Delsarte [5]. This thesis helped to motivate the work of Bannai [2, p. i], who taught a series of graduate courses on commutative association schemes from September 1978 to December 1982 at the Ohio State University. The lecture notes from those courses, along with more recent developments, became a book coauthored with T. Ito [2]. The book had a large impact; it is currently cited 698 times according to MathSciNet.
In the Introduction to [2], Bannai and Ito describe the goals of their book. One goal was to summarize what is known about commutative association schemes up to that time. Another goal was to focus the reader's attention on two remarkable types of schemes, said to be P -polynomial and Q-polynomial. A P -polynomial scheme is essentially the same thing as a distance-regular graph, and can be viewed as a finite analog of a 2-point homogeneous space [28]. Similarly, a Q-polynomial scheme is a finite analog of a rank 1 symmetric space [28]. By a theorem of H. C. Wang [28], a compact Riemannian manifold is 2-point homogeneous if and only if it is rank 1 symmetric. This result was extended to the noncompact case by J. Tits [26] and S. Helgason [8]. Motivated by all this, Bannai and Ito conjectured that a primitive association scheme is P -polynomial if and only if it is Q-polynomial, provided that the diameter is sufficiently large [2, p. 312]. They also proposed the classification of schemes that are both P -polynomial and Q-polynomial [2, p. xiii].
Progress on the proposed classification was made while the book was still in preparation. A P-polynomial scheme gets its name from the fact that there exists a sequence of orthogonal polynomials {u_i}_{i=0}^d such that u_i(A) = A_i/k_i for 0 ≤ i ≤ d, where d is the diameter of the scheme, A_i is the ith associate matrix, A = A_1, and k_i is the ith valency [2, pp. 190, 261]. Similarly, for a Q-polynomial scheme there exists a sequence of orthogonal polynomials {u*_i}_{i=0}^d such that u*_i(A*) = A*_i/k*_i for 0 ≤ i ≤ d, where A*_i is the ith dual associate matrix, A* = A*_1, and k*_i is the ith dual valency [2, pp. 193, 261], [15, p. 384]. For schemes that are both P-polynomial and Q-polynomial, we have u_i(θ_j) = u*_j(θ*_i) for 0 ≤ i, j ≤ d, where {θ_i}_{i=0}^d (resp. {θ*_i}_{i=0}^d) are the eigenvalues of A (resp. A*) [2, p. 262]. These equations are known as Delsarte duality [12] or Askey-Wilson duality [22, p. 261]. This duality can be defined for d = ∞, but throughout this paper we assume that d is finite.
Askey-Wilson duality comes up naturally in the area of special functions and orthogonal polynomials. In this area the classical orthogonal polynomials are often described using a partially-ordered set called the Askey-tableau. The vertices in the poset represent the various families of classical orthogonal polynomials, and the covering relation describes what happens when a limit is taken. See [11] for an early version of the tableau, and [10, pp. 183, 413] for a more recent version. One branch of the tableau, sometimes called the terminating branch, contains the polynomials that are orthogonal with respect to a measure that is nonzero at finitely many arguments. At the top of this terminating branch sit the q-Racah polynomials, introduced in 1979 by R. Askey and J. Wilson [1]. The rest of the terminating branch consists of the q-Hahn, dual q-Hahn, q-Krawtchouk, dual q-Krawtchouk, quantum q-Krawtchouk, affine q-Krawtchouk, Racah, Hahn, dual Hahn, and Krawtchouk polynomials. The above-named polynomials are defined using hypergeometric series or basic hypergeometric series, and it is transparent from the definition that they satisfy Askey-Wilson duality.
Back at Ohio State, there was a graduate student attending Bannai's classes by the name of Douglas Leonard. With Askey's encouragement, Leonard showed in [12] that the q-Racah polynomials give the most general orthogonal polynomial system that satisfies Askey-Wilson duality, under the assumption that d ≥ 9. In [2, Theorem 5.1] Bannai and Ito give a version of Leonard's theorem that removes the assumption on d and explicitly describes all the limiting cases that show up. This version gives a complete classification of the orthogonal polynomial systems that satisfy Askey-Wilson duality. It shows that the orthogonal polynomial systems that satisfy Askey-Wilson duality are from the terminating branch of the Askey-tableau, except for one family with q = −1 now called the Bannai-Ito polynomials [2, p. 271], [23, Example 5.14]. In our view, the terminating branch of the Askey-tableau should include the Bannai-Ito polynomials. Adopting this view, for the rest of this paper we include the Bannai-Ito polynomials in the terminating branch of the Askey-tableau.
The Leonard theorem [2, Theorem 5.1] is notoriously complicated; the statement alone takes 11 pages. In an effort to simplify and clarify the theorem, the present author introduced the notion of a Leonard pair [19, Definition 1.1] and a Leonard system [19, Definition 1.4]. Roughly speaking, a Leonard pair consists of two diagonalizable linear transformations on a finite-dimensional vector space, each acting on the eigenspaces of the other one in an irreducible tridiagonal fashion; see Definition 2.1 below. A Leonard system is essentially a Leonard pair, together with appropriate orderings of the eigenspaces of each member of the pair; see Definition 2.3 below. In [19, Theorem 1.9] the Leonard systems are classified up to isomorphism. This classification is related to Leonard's theorem as follows. In [19, Appendix A] and [22, Section 19], a bijection is given between the isomorphism classes of Leonard systems over R and the orthogonal polynomial systems that satisfy Askey-Wilson duality. Given the bijection, the classification [19, Theorem 1.9] becomes a 'linear-algebraic version' of Leonard's theorem. This version is conceptually simple and quite elegant in our view. In [25] we start with the Leonard pair axiom and derive, in a uniform and attractive manner, the polynomials in the terminating branch of the Askey-tableau, along with their properties such as the 3-term recurrence, difference equation, Askey-Wilson duality, and orthogonality.
We comment on how the theory of Leonard pairs and Leonard systems depends on the choice of ground field. The classification [19, Theorem 1.9] shows that the ground field does not matter in a substantial way, unless it has characteristic 2. In this case, the theory admits an additional family of polynomials called the orphans. The orphans have diameter d = 3 only; they are described in [23, Example 5.15] and Example 20.13 below.
The book [2] appeared in 1984 and the paper [19] appeared in 2001. It is natural to ask what happened in between. The concept of a Leonard pair over R appears in [14, Definitions 1.1, 1.2], where it is called a thin Leonard pair. Also appearing in [14] is the correspondence between Leonard pairs over R and the orthogonal polynomial systems that satisfy Askey-Wilson duality. In addition, [14, Definition 3.1] describes an algebra called the Leonard algebra, now known as the Askey-Wilson algebra. The paper [14] was submitted but never published. In [29] A. Zhedanov introduced the Askey-Wilson algebra. This algebra has a presentation involving two generators K_1, K_2 that satisfy a pair of quadratic relations. In [6], Granovskii, Lutzenko, and Zhedanov consider a finite-dimensional irreducible module for the Askey-Wilson algebra, on which each of K_1, K_2 is diagonalizable. Under some minor assumptions, they show that each of K_1, K_2 acts in an irreducible tridiagonal fashion on an eigenbasis for the other one. In hindsight, it is fair to say that they constructed an example of a Leonard pair, although they did not define a Leonard pair as an independent concept.

Turning to the present paper, we obtain two main results: (i) an improved proof for the classification of Leonard systems; (ii) a comprehensive description of the intersection numbers of a Leonard system. We now describe our main results in detail. In [19, Theorem 1.9] the Leonard systems are classified up to isomorphism, and the given proof is completely correct as far as we know. However, the proof is longer than necessary. In the roughly two decades since the paper was published, we have discovered some 'shortcuts' that simplify the proof and avoid certain tedious calculations. The shortcuts are summarized as follows.
• In [19, Section 3] we established the split canonical form for a Leonard system. In the present paper we make use of the fact that the split canonical form still exists under weaker assumptions; these are described in Proposition 7.6 below.
• The concept of a normalizing idempotent was introduced by Edward Hanson in [7, Section 6]. In the present paper we use this concept to simplify numerous arguments; see Sections 6, 7, 17 below.
• In [19, Theorem 4.8] we explicitly gave the matrix entries for a certain matrix representation of the primitive idempotents for a Leonard system. The computation of these entries is tedious and takes up most of [19, Section 4]. In the present paper we replace all of this by a single identity (15) that is established in a few lines.
• In the present paper we use an antiautomorphism † to obtain Proposition 4.4, which is roughly summarized as follows: as we construct a Leonard system, if we construct three-fourths of it then the last fourth comes for free.
• In [19, Lemma 7.2] we used a slightly obscure method to establish the irreducibility of the underlying module for a Leonard system. In the present paper this lemma is avoided using the first bullet point above.
• Using the improvements listed above, we replace the arguments in [19, Sections 13, 14] with more efficient arguments in Section 17 below.
Some parts of the improved proof are unchanged from the original; we still use [19, Sections 8, 9] and [19, Lemmas 10.2, 12.4]. These results are reproduced in the present paper in order to obtain a complete proof, all in one place. We believe that this complete proof is suitable for The Book if not this journal.
Concerning our second main result, we mentioned earlier that the Leonard systems correspond to the orthogonal polynomial sequences that satisfy Askey-Wilson duality. Unfortunately, it is a bit difficult to go back and forth between the two points of view, because the two emphasize different parameters: from the polynomial perspective, the main parameters are the intersection numbers (or connection coefficients) that describe the 3-term recurrence, while from the Leonard system perspective, the main parameters are the first and second split sequences that make up part of the parameter array. There are some equations that relate the two types of parameters; see [22, Theorem 17.7] and Lemma 19.4 below. However, the nonlinear nature of these equations makes them difficult to use. In order to mitigate the difficulty, we display many identities that involve the intersection numbers along with the first and second split sequences. Taken together, these identities should make it easier to work with the intersection numbers in the future. These identities can be found in Section 19. We also explicitly give the intersection numbers for every isomorphism class of Leonard system; these are contained in the Appendix.
The paper is organized as follows. Sections 2, 3 contain preliminary comments and definitions. In Section 4 we describe the antiautomorphism † and use it to obtain Proposition 4.4. In Section 5 we describe some polynomials that will be used throughout the paper. In Section 6 we discuss the concept of a normalizing idempotent. In Section 7 we use normalizing idempotents to describe certain kinds of decompositions relevant to Leonard systems. Section 8 contains the wrap-around result. In Section 9 we recall the parameter array of a Leonard system. In Section 10 we state the Leonard system classification, which is Theorem 10.1. Sections 11-13 are about recurrent sequences. In Section 14 we define a polynomial in two variables that will be useful on several occasions later in the paper. Sections 15,16 are about the tridiagonal relations. In Section 17 we complete the proof of Theorem 10.1. Section 18 contains two characterizations related to Leonard systems and parameter arrays. In Section 19 we give a comprehensive treatment of the intersection numbers of a Leonard system. These intersection numbers are listed in the Appendix.

Preliminaries
We now begin our formal argument. Shortly we will define a Leonard pair and a Leonard system. Before we get into detail, we briefly review some notation and basic concepts. Let F denote a field. Every vector space and algebra discussed in this paper is understood to be over F. Throughout the paper fix an integer d ≥ 0. Let Mat_{d+1}(F) denote the algebra consisting of the d + 1 by d + 1 matrices that have all entries in F. We index the rows and columns by 0, 1, . . . , d. Throughout the paper V denotes a vector space with dimension d + 1. Let End(V) denote the algebra consisting of the F-linear maps from V to V. Next we recall some terms from matrix theory. A matrix M ∈ Mat_{d+1}(F) is called tridiagonal whenever each nonzero entry lies on either the diagonal, the subdiagonal, or the superdiagonal. Assume that M is tridiagonal. Then M is called irreducible whenever each entry on the subdiagonal is nonzero, and each entry on the superdiagonal is nonzero.

Definition 2.1. By a Leonard pair on V, we mean an ordered pair A, A* of elements in End(V) that satisfy (i), (ii) below: (i) there exists a basis for V with respect to which the matrix representing A is diagonal and the matrix representing A* is irreducible tridiagonal; (ii) there exists a basis for V with respect to which the matrix representing A* is diagonal and the matrix representing A is irreducible tridiagonal.
The Leonard pair A, A * is said to be over F and have diameter d.
Note 2.2. According to a common notational convention, A * denotes the conjugate-transpose of A. We are not using this convention. In a Leonard pair A, A * the linear transformations A and A * are arbitrary subject to (i), (ii) above.
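To make Definition 2.1 concrete, we include a small computational check (our illustration, not part of the formal development). The matrices below form what is perhaps the best-known Leonard pair, of Krawtchouk type with d = 3; the eigenvector matrix P is hardcoded for this example, and exact rational arithmetic is used throughout.

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# A well-known Leonard pair with d = 3 (Krawtchouk type): A is irreducible
# tridiagonal and A* is diagonal, so condition (ii) of Definition 2.1 is visible.
A = [[F(x) for x in row] for row in
     [[0, 3, 0, 0], [1, 0, 2, 0], [0, 2, 0, 1], [0, 0, 3, 0]]]
Astar = [[F(3 - 2 * i) if i == j else F(0) for j in range(4)] for i in range(4)]

# Columns of P are eigenvectors of A for the eigenvalues 3, 1, -1, -3.
P = [[F(x) for x in row] for row in
     [[1, 3, 3, 1], [1, 1, -1, -1], [1, -1, -1, 1], [1, -3, 3, -1]]]
Theta = [[F(3 - 2 * j) if i == j else F(0) for j in range(4)] for i in range(4)]
assert matmul(A, P) == matmul(P, Theta)  # columns of P really are eigenvectors

# The columns of P are orthogonal for the binomial weights (1, 3, 3, 1),
# which gives P^{-1} in closed form; we verify it.
w, nrm = [1, 3, 3, 1], [8, 24, 24, 8]
Pinv = [[F(w[j]) * P[j][i] / nrm[i] for j in range(4)] for i in range(4)]
I = [[F(1) if i == j else F(0) for j in range(4)] for i in range(4)]
assert matmul(Pinv, P) == I

# In the eigenbasis of A, the diagonal matrix A* becomes irreducible tridiagonal;
# this is condition (i) of Definition 2.1.
X = matmul(matmul(Pinv, Astar), P)
for i in range(4):
    for j in range(4):
        if abs(i - j) > 1:
            assert X[i][j] == 0
        if abs(i - j) == 1:
            assert X[i][j] != 0
print(X == A)  # True: in this example the two representations coincide
```

The final line prints True: for this particular example the matrix representing A* in an eigenbasis of A coincides with the original matrix representing A. This self-duality is special to the Krawtchouk example and is not part of Definition 2.1.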
When working with a Leonard pair, it is convenient to consider a closely related object called a Leonard system. Before defining a Leonard system, we recall a few concepts from linear algebra. An element A ∈ End(V) is said to be diagonalizable whenever V is spanned by the eigenspaces of A. The element A is called multiplicity-free whenever A is diagonalizable and each eigenspace of A has dimension one. Note that A is multiplicity-free if and only if A has d + 1 mutually distinct eigenvalues in F. Assume that A is multiplicity-free, and let {V_i}_{i=0}^d denote an ordering of the eigenspaces of A.
For 0 ≤ i ≤ d let θ_i denote the eigenvalue of A associated with V_i, and let E_i denote the corresponding primitive idempotent of A, so that E_i = ∏_{0 ≤ j ≤ d, j ≠ i} (A − θ_jI)/(θ_i − θ_j). By linear algebra, E_iE_j = δ_{ij}E_i (0 ≤ i, j ≤ d), ∑_{i=0}^d E_i = I, E_iV = V_i (0 ≤ i ≤ d), and A = ∑_{i=0}^d θ_iE_i. Moreover rank(E_i) = tr(E_i) = 1 for 0 ≤ i ≤ d, where tr means trace. Let D denote the subalgebra of End(V) generated by A; each of {A^i}_{i=0}^d, {E_i}_{i=0}^d is a basis for D.
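The facts just recalled are easy to test computationally. The following sketch (our illustration; the matrix A and its eigenvalues are arbitrary choices) builds the primitive idempotents E_i via the product formula and verifies their defining properties in exact arithmetic.

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def scal(c, X):
    return [[c * x for x in row] for row in X]

def add(X, Y):
    return [[a + b for a, b in zip(r, s)] for r, s in zip(X, Y)]

d = 2
theta = [F(0), F(1), F(3)]   # distinct eigenvalues, so A is multiplicity-free
A = [[theta[i] if i == j else F(0) for j in range(d + 1)] for i in range(d + 1)]
I = [[F(1) if i == j else F(0) for j in range(d + 1)] for i in range(d + 1)]

# E_i = prod_{j != i} (A - theta_j I) / (theta_i - theta_j)
E = []
for i in range(d + 1):
    M = I
    for j in range(d + 1):
        if j != i:
            M = scal(F(1) / (theta[i] - theta[j]), matmul(M, add(A, scal(-theta[j], I))))
    E.append(M)

# E_i E_j = delta_{ij} E_i
for i in range(d + 1):
    for j in range(d + 1):
        assert matmul(E[i], E[j]) == (E[i] if i == j else scal(F(0), I))

# sum E_i = I and A = sum theta_i E_i
S, T = scal(F(0), I), scal(F(0), I)
for i in range(d + 1):
    S = add(S, E[i])
    T = add(T, scal(theta[i], E[i]))
assert S == I and T == A

# tr(E_i) = 1
assert all(sum(E[i][k][k] for k in range(d + 1)) == 1 for i in range(d + 1))
print("ok")
```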
Definition 2.3. By a Leonard system on V, we mean a sequence Φ = (A; {E_i}_{i=0}^d; A*; {E*_i}_{i=0}^d) of elements in End(V) that satisfy (i)-(v) below: (i) each of A, A* is multiplicity-free; (ii) {E_i}_{i=0}^d is an ordering of the primitive idempotents of A; (iii) {E*_i}_{i=0}^d is an ordering of the primitive idempotents of A*; (iv) E_iA*E_j = 0 if |i − j| > 1 (0 ≤ i, j ≤ d); (v) E*_iAE*_j = 0 if |i − j| > 1 (0 ≤ i, j ≤ d). The Leonard system Φ is said to be over F and have diameter d.
Leonard pairs and Leonard systems are related as follows. Let (A; {E_i}_{i=0}^d; A*; {E*_i}_{i=0}^d) denote a Leonard system on V. Then A, A* is a Leonard pair on V. Next we recall the notion of isomorphism for Leonard pairs and Leonard systems. Definition 2.4. Let A, A* denote a Leonard pair on V, and let B, B* denote a Leonard pair on a vector space V′. By an isomorphism of Leonard pairs from A, A* to B, B* we mean a vector space isomorphism σ : V → V′ such that B = σAσ^{−1} and B* = σA*σ^{−1}. The Leonard pairs A, A* and B, B* are isomorphic whenever there exists an isomorphism of Leonard pairs from A, A* to B, B*. Let σ : V → V′ denote an isomorphism of vector spaces. For X ∈ End(V) abbreviate X^σ = σXσ^{−1} and note that X^σ ∈ End(V′). The map End(V) → End(V′), X ↦ X^σ is an isomorphism of algebras. For a Leonard system Φ = (A; {E_i}_{i=0}^d; A*; {E*_i}_{i=0}^d) on V, the sequence Φ^σ = (A^σ; {E_i^σ}_{i=0}^d; (A*)^σ; {(E*_i)^σ}_{i=0}^d) is a Leonard system on V′.
Definition 2.5. Let Φ denote a Leonard system on V , and let Φ ′ denote a Leonard system on a vector space V ′ . By an isomorphism of Leonard systems from Φ to Φ ′ , we mean an isomorphism of vector spaces σ : V → V ′ such that Φ σ = Φ ′ . The Leonard systems Φ and Φ ′ are isomorphic whenever there exists an isomorphism of Leonard systems from Φ to Φ ′ .
In [19, Theorem 1.9] we classified the Leonard systems up to isomorphism. Our first main goal in the present paper is to give an improved proof of this classification. This goal will be accomplished in Sections 3-17. The statement of the classification is given in Theorem 10.1. The proof of Theorem 10.1 will be completed in Section 17.

Pre Leonard systems
As we start our investigation of Leonard systems, it is helpful to consider a more general object called a pre Leonard system. This object is defined as follows.
Definition 3.1. By a pre Leonard system on V, we mean a sequence Φ = (A; {E_i}_{i=0}^d; A*; {E*_i}_{i=0}^d) of elements in End(V) that satisfy conditions (i)-(iii) in Definition 2.3.
The results in this section refer to the pre Leonard system Φ from (3).
For 0 ≤ i ≤ d let θ_i (resp. θ*_i) denote the eigenvalue of A (resp. A*) associated with E_i (resp. E*_i). We call {θ_i}_{i=0}^d (resp. {θ*_i}_{i=0}^d) the eigenvalue sequence (resp. dual eigenvalue sequence) of Φ. Let D (resp. D*) denote the subalgebra of End(V) generated by A (resp. A*).

Lemma 3.4. We have A = ∑_{i=0}^d θ_iE_i (4) and A* = ∑_{i=0}^d θ*_iE*_i (5). Proof. To obtain (4), observe that AE_i = θ_iE_i for 0 ≤ i ≤ d and ∑_{i=0}^d E_i = I. The proof of (5) is similar.
(i) In this equation take the trace of each side and use Definition 3.3 along with tr(XY) = tr(YX) to obtain a_i = α_i. (ii) Similar to the proof of (i) above.
We have been discussing the pre Leonard system Φ = (A; {E_i}_{i=0}^d; A*; {E*_i}_{i=0}^d) on V. Each of the following is a pre Leonard system on V:

Φ* = (A*; {E*_i}_{i=0}^d; A; {E_i}_{i=0}^d),
Φ↓ = (A; {E_i}_{i=0}^d; A*; {E*_{d−i}}_{i=0}^d),
Φ⇓ = (A; {E_{d−i}}_{i=0}^d; A*; {E*_i}_{i=0}^d).

Proposition 3.6. With the above notation, the eigenvalue sequence (resp. dual eigenvalue sequence) of Φ* is {θ*_i}_{i=0}^d (resp. {θ_i}_{i=0}^d); that of Φ↓ is {θ_i}_{i=0}^d (resp. {θ*_{d−i}}_{i=0}^d); and that of Φ⇓ is {θ_{d−i}}_{i=0}^d (resp. {θ*_i}_{i=0}^d). Proof. Use Definitions 3.2, 3.3.

The antiautomorphism †
We continue to discuss the pre Leonard system Φ = (A; {E_i}_{i=0}^d; A*; {E*_i}_{i=0}^d) from (3). Lemma 4.1. Assume that there exists a basis for V with respect to which the matrix representing A is irreducible tridiagonal and the matrix representing A* is diagonal. Then the elements A^iE*_0A^j (0 ≤ i, j ≤ d) form a basis for the vector space End(V).
Proof. By assumption there exists a basis {v_i}_{i=0}^d for V with respect to which the matrix representing A is irreducible tridiagonal and the matrix representing A* is diagonal. Without loss of generality, we may identify each X ∈ End(V) with the matrix in Mat_{d+1}(F) that represents X with respect to {v_i}_{i=0}^d. From this point of view A is irreducible tridiagonal and E*_0 = diag(1, 0, . . . , 0). Using these matrices one routinely checks that the elements (6) are linearly independent. There are (d + 1)² elements listed in (6), and this is the dimension of End(V). Therefore the elements (6) form a basis for End(V). (ii) By (i) above and since E*_0 is a polynomial in A*.

By an automorphism of End(V) we mean an algebra isomorphism End(V) → End(V). By an antiautomorphism of End(V) we mean a vector space isomorphism ζ : End(V) → End(V) such that (XY)^ζ = Y^ζX^ζ for all X, Y ∈ End(V). (ii) † fixes everything in D and everything in D*;

The matrix K is invertible. The map ♭ is an antiautomorphism of Mat_{d+1}(F) that fixes each of B, B*. The composition † is an antiautomorphism of End(V) that fixes each of A, A*. We have shown that † exists. Next we show that † is unique. Let ζ denote an antiautomorphism of End(V) that fixes each of A, A*. Then the composition of ζ with the inverse of † is an automorphism of End(V) that fixes each of A, A*; this composition is therefore the identity, so ζ = †.

Proposition 4.4. Assume at least three of (i)-(iv) hold. Then each of (i)-(iv) holds; in other words the pre Leonard system Φ is a Leonard system.
Proof. Interchanging A, A* if necessary, we may assume without loss of generality that (i), (ii) hold. Now the assumption of Lemma 4.1 holds, so Lemma 4.3 applies.

We continue to discuss the pre Leonard system Φ from (3). Let λ denote an indeterminate. Let F[λ] denote the algebra consisting of the polynomials in λ that have all coefficients in F.
We mention some results about D: the elements {A^i}_{i=0}^d form a basis for D.
But this follows from Lemma 5.3(i).

Normalizing idempotents
We continue to discuss the pre Leonard system Φ from (3). Next we explain what it means for E*_0 to be normalizing. This concept was introduced in [7, Section 6], although our point of view is different.
In the next two lemmas we give some necessary and sufficient conditions for E * 0 to be normalizing. The proofs are routine, and omitted.
Lemma 6.2. The following (i)-(iv) are equivalent:

Lemma 6.3. The following (i)-(iv) are equivalent:

Proposition 6.4. Assume that E*_0 is normalizing. Then for X ∈ End(V) and 0 ≤ i ≤ d the following are equivalent:

Proof. Using Lemma 6.3(ii),

Normalizing idempotents and decompositions
We continue to discuss the pre Leonard system Φ from (3).

Lemma 7.4. Assume that E*_0 is normalizing. Then the following (i)-(v) hold:

Proof. (i) By Lemma 6.3(iv). (iv) By Lemma 5.4(i) and Lemma 6.3(iv). (v) By Lemma 5.4(ii) and Lemma 6.3(iv).
In this inclusion, the left-hand side has dimension i+1 and the right-hand side has dimension at most i + 1. Therefore the inclusion holds with equality.
Also observe that Proposition 7.6. The following (i)-(iii) are equivalent: (iii) There exist scalars {ϕ i } d i=1 in F and a basis for V with respect to which A : Assume that (i)-(iii) hold. Then E * 0 is normalizing and This decomposition satisfies (10) by Lemma 7.4(ii),(iii). We now show (11). By Lemma 7.4(iv) and Lemma 7.5(iii), For 0 ≤ i ≤ d, Also, using Lemma 7.4(v) and (9), The above comments imply (11).
Next we show (9). Let i, j be given with j − i > 1. Using Lemma 7.4(v) and (11),

This is a reformulation of (ii) in terms of matrices. Now assume that (i)-(iii) hold. We mentioned in the proof of (ii) ⇒ (i) that E*_0 is normalizing. Let {u_i}_{i=0}^d denote the basis for V from (iii), and define ξ = u_0. From the matrix representing A* in (12), we see that ξ is an eigenvector for A*. The sequence {ϕ_i}_{i=1}^d is unique since the vector ξ is unique up to multiplication by a nonzero scalar.
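For the reader's convenience we record the shape of the split form referred to in Proposition 7.6(iii). We believe display (12) has the following standard form (cf. [19, Section 3]), with A lower bidiagonal and A* upper bidiagonal:

```latex
A:\;
\begin{pmatrix}
\theta_0 & & & & \\
1 & \theta_1 & & & \\
 & 1 & \theta_2 & & \\
 & & \ddots & \ddots & \\
 & & & 1 & \theta_d
\end{pmatrix},
\qquad
A^{*}:\;
\begin{pmatrix}
\theta^{*}_0 & \varphi_1 & & & \\
 & \theta^{*}_1 & \varphi_2 & & \\
 & & \theta^{*}_2 & \ddots & \\
 & & & \ddots & \varphi_d \\
 & & & & \theta^{*}_d
\end{pmatrix}.
```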
Lemma 7.7. Assume that the equivalent conditions (i)-(iii) hold in Proposition 7.6. Then for 1 ≤ i ≤ d the following (i)-(iii) are equivalent:

A result about wrap-around
We continue to discuss the pre Leonard system Φ from (3). Throughout this section we assume that Φ satisfies the equivalent conditions (i)-(iii) in Proposition 7.6. We will obtain a useful result involving the scalars {ϕ_i}_{i=1}^d from Proposition 7.6(iii); this result is sometimes called the wrap-around result.

Recall the parameters {a_i}_{i=0}^d and {a*_i}_{i=0}^d from Definition 3.3.
Let r be given. We compute the trace of AA*^r in two ways. On one hand, using Definition 3.3 we obtain (14). On the other hand, consider the matrix representations of A and A* from (12). Using these matrices we compute the trace of AA*^r as the sum of the diagonal entries; a brief calculation yields (15). Comparing (14), (15) we get an equation that becomes 0 = ∑_{i=0}^d (x_i − a_i)(θ*_i)^r after rearranging the terms. This holds for 0 ≤ r ≤ d, and the scalars {θ*_i}_{i=0}^d are mutually distinct, so a Vandermonde argument yields a_i = x_i for 0 ≤ i ≤ d. Our assertions concerning {a*_i}_{i=0}^d are similarly obtained, by computing in two ways the trace of A*A^r for 0 ≤ r ≤ d.
For 1 ≤ i ≤ d the scalar ϕ_i is equal to each of the following four expressions:

Proof. Use Lemmas 3.4, 8.1.
Proof. In the equation I = ∑_{i=0}^d E*_i, multiply each term on the right by AE*_0 and simplify the result to obtain (16). In (16), multiply each term on the left by A* and simplify to get (17). In (18), multiply each term on the right by A and simplify. We now compute θ*_1E_d times (16), minus E_d times (17), minus (18). In the resulting equation, consider the coefficient of E_dE*_0 and evaluate it.

For the sake of completeness, we mention a second version of Proposition 8.4. We do not use this second version, so we will not dwell on the proof.
Proof. Similar to the proof of Proposition 8.4.

The parameter array of a Leonard system
In this section we consider a Leonard system Φ = (A; {E_i}_{i=0}^d; A*; {E*_i}_{i=0}^d) on V. Note that Φ satisfies the equivalent conditions (i)-(iii) in Proposition 7.6. Definition 9.1. By the first split sequence for Φ we mean the sequence {ϕ_i}_{i=1}^d from Proposition 7.6(iii). By the second split sequence for Φ we mean the first split sequence {φ_i}_{i=1}^d for Φ⇓. By Lemma 7.7 and the construction, ϕ_i and φ_i are nonzero for 1 ≤ i ≤ d.
In Lemma 8.1 we gave some formulas involving the first split sequence. Lemma 9.4. For d ≥ 1 we have

Proof. The sequence {φ_i}_{i=1}^d is the first split sequence for Φ⇓. Apply Lemma 8.1 to Φ⇓ and use the data for Φ⇓ in Proposition 3.6.
Proof. Apply Lemma 8.2 to Φ ⇓ and use the data for Φ ⇓ in Proposition 3.6.
We mention a variation on Lemma 9.3. Proof. By Lemma 9.3 and Definition 9.6.

Statement of the Leonard system classification
In the following theorem we classify up to isomorphism the Leonard systems over F.
Theorem 10.1. Consider a sequence

({θ_i}_{i=0}^d; {θ*_i}_{i=0}^d; {ϕ_i}_{i=1}^d; {φ_i}_{i=1}^d)  (21)

of scalars in F. Then there exists a Leonard system Φ over F with parameter array (21) if and only if the following conditions (PA1)-(PA5) hold:
(PA1) θ_i ≠ θ_j and θ*_i ≠ θ*_j if i ≠ j (0 ≤ i, j ≤ d);
(PA2) ϕ_i ≠ 0 and φ_i ≠ 0 for 1 ≤ i ≤ d;
(PA3) ϕ_i = φ_1 ∑_{h=0}^{i−1} (θ_h − θ_{d−h})/(θ_0 − θ_d) + (θ*_i − θ*_0)(θ_{i−1} − θ_d) for 1 ≤ i ≤ d;
(PA4) φ_i = ϕ_1 ∑_{h=0}^{i−1} (θ_h − θ_{d−h})/(θ_0 − θ_d) + (θ*_i − θ*_0)(θ_{d−i+1} − θ_0) for 1 ≤ i ≤ d;
(PA5) the expressions (θ_{i−2} − θ_{i+1})/(θ_{i−1} − θ_i) and (θ*_{i−2} − θ*_{i+1})/(θ*_{i−1} − θ*_i) are equal and independent of i for 2 ≤ i ≤ d − 1.
Moreover, if Φ exists then Φ is unique up to isomorphism of Leonard systems.
The proof of Theorem 10.1 will be completed in Section 17.
Definition 10.2. By a parameter array of diameter d over F, we mean a sequence (21) of scalars in F that satisfy (PA1)-(PA5).
Theorem 10.1 gives a bijection between the following two sets: (i) the parameter arrays over F that have diameter d; (ii) the isomorphism classes of Leonard systems over F that have diameter d.
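As an illustration of Theorem 10.1 (ours, not part of the formal development), the sketch below checks (PA1)-(PA5) for a known self-dual parameter array of Krawtchouk type: θ_i = θ*_i = d − 2i, ϕ_i = −2i(d − i + 1), φ_i = 2i(d − i + 1), over the rationals with d = 4. The displayed forms of (PA3), (PA4) are the ones from [19, Theorem 1.9].

```python
from fractions import Fraction as F

d = 4
theta  = [F(d - 2 * i) for i in range(d + 1)]
thetas = list(theta)                     # self-dual: theta*_i = theta_i
phi  = [None] + [F(-2 * i * (d - i + 1)) for i in range(1, d + 1)]  # first split sequence
phi2 = [None] + [F(2 * i * (d - i + 1)) for i in range(1, d + 1)]   # second split sequence

# (PA1) the eigenvalues and dual eigenvalues are mutually distinct
assert len(set(theta)) == d + 1 and len(set(thetas)) == d + 1
# (PA2) phi_i and phi2_i are nonzero
assert all(phi[i] != 0 and phi2[i] != 0 for i in range(1, d + 1))

def S(i):  # the sum appearing in (PA3), (PA4)
    return sum((theta[h] - theta[d - h]) / (theta[0] - theta[d]) for h in range(i))

# (PA3), (PA4) in the form given in [19, Theorem 1.9]
for i in range(1, d + 1):
    assert phi[i]  == phi2[1] * S(i) + (thetas[i] - thetas[0]) * (theta[i - 1] - theta[d])
    assert phi2[i] == phi[1]  * S(i) + (thetas[i] - thetas[0]) * (theta[d - i + 1] - theta[0])

# (PA5) the two expressions are equal and independent of i for 2 <= i <= d - 1
r  = {(theta[i - 2] - theta[i + 1]) / (theta[i - 1] - theta[i]) for i in range(2, d)}
rs = {(thetas[i - 2] - thetas[i + 1]) / (thetas[i - 1] - thetas[i]) for i in range(2, d)}
assert len(r) == 1 and r == rs
print("parameter array OK")
```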
We have a comment.

Recurrent sequences
Throughout this section let {θ i } d i=0 denote scalars in F.
Definition 11.1. (i) The sequence {θ_i}_{i=0}^d is said to be recurrent whenever θ_{i−1} ≠ θ_i for 2 ≤ i ≤ d − 1 and the expression (θ_{i−2} − θ_{i+1})/(θ_{i−1} − θ_i) is independent of i for 2 ≤ i ≤ d − 1.
(ii) Given β ∈ F, the sequence {θ_i}_{i=0}^d is said to be β-recurrent whenever the expression θ_{i−2} − (β + 1)θ_{i−1} + (β + 1)θ_i − θ_{i+1} is zero for 2 ≤ i ≤ d − 1.
Proof. Routine. Lemma 11.3. For β ∈ F the following are equivalent: (24) is zero by assumption, so The left-hand side of (25) is independent of i, and the result follows. Lemma 11.4. The following (i), (ii) hold for all β, γ ∈ F.
Proof. Let p i denote the expression on the left in (26), and observe Assertions (i), (ii) are both routine consequences of this.
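To illustrate the above notions (our own example, not from the formal development): a geometric sequence θ_i = q^i is β-recurrent with β = q + q^{-1}, and the ratio in the recurrent condition equals β + 1. The check below uses exact rational arithmetic; the values d = 8 and q = 3/2 are arbitrary.

```python
from fractions import Fraction as F

d, q = 8, F(3, 2)
theta = [q ** i for i in range(d + 1)]
beta = q + 1 / q

# beta-recurrent: theta_{i-2} - (beta+1)theta_{i-1} + (beta+1)theta_i - theta_{i+1} = 0
for i in range(2, d):
    assert theta[i-2] - (beta + 1) * theta[i-1] + (beta + 1) * theta[i] - theta[i+1] == 0

# recurrent: the ratio (theta_{i-2} - theta_{i+1}) / (theta_{i-1} - theta_i)
# is independent of i, and equals beta + 1
ratios = {(theta[i-2] - theta[i+1]) / (theta[i-1] - theta[i]) for i in range(2, d)}
assert ratios == {beta + 1}
print("beta =", beta)
```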

Recurrent sequences in closed form
In this section, we obtain some formulas involving recurrent sequences. Let F̄ denote the algebraic closure of F. For q ∈ F̄ let F[q] denote the field extension of F generated by q.
Throughout this section let β and {θ i } d i=0 denote scalars in F.
Proof. (i) We assume d ≥ 2; otherwise the result is trivial. Let q be given, and consider the equations (27) for i = 0, 1, 2. These equations are linear in α 1 , α 2 , α 3 . We routinely find the coefficient matrix is nonsingular, so there exist α 1 , α 2 , α 3 in F[q] such that (27) holds for i = 0, 1, 2. Using these scalars, let ε i denote the left-hand side of (27) minus the right-hand side of (27), for 0 ≤ i ≤ d. On one hand ε 0 , ε 1 , ε 2 are zero from the construction. On the other hand, one readily checks By these comments ε i = 0 for 0 ≤ i ≤ d, and the result follows.
(ii), (iii) Similar to the proof of (i) above.
(ii) Suppose d ≥ p. Setting i = p in (28) and recalling that p = 0 in F since char(F) = p, we obtain θ_p = θ_0, a contradiction. Hence d < p.
(iv) Suppose d ≥ 4. Setting i = 4 in (28), we find θ_4 = θ_0 in view of the comment at the end of Lemma 12.1. This is a contradiction, so d ≤ 3.

Lemma 12.3. Assume that {θ_i}_{i=0}^d are mutually distinct and β-recurrent. Pick any integers i, j, r, s (0 ≤ i, j, r, s ≤ d) such that i + j = r + s and r ≠ s. Then (i)-(iv) hold below.

A sum
Throughout this section assume d ≥ 1. Let β and {θ_i}_{i=0}^d denote scalars in F with θ_0 ≠ θ_d. We consider the sums

∑_{h=0}^{i−1} (θ_h − θ_{d−h})/(θ_0 − θ_d),  (34)

where 0 ≤ i ≤ d + 1. Denoting the sum in (34) by ϑ_i, we have ϑ_0 = 0 and ϑ_{d+1} = 0. Moreover ϑ_1 = 1 = ϑ_d. The sums (34) play an important role a bit later, so we will examine them carefully. We begin by giving explicit formulas for the sums (34) under the assumption that {θ_i}_{i=0}^d is β-recurrent. To avoid trivialities we assume that d ≥ 3.
Lemma 13.3. Assume that {θ_i}_{i=0}^d are mutually distinct and β-recurrent, and for 0 ≤ i ≤ d + 1 define ϑ_i as in (34). Then the sequence {ϑ_i}_{i=0}^{d+1} is β-recurrent. Proof. For d = 1 there is nothing to prove. For d = 2 we have

For d ≥ 3 the result is obtained by examining the cases in Lemma 13.1.
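Lemma 13.3 can be tested numerically. The sketch below assumes that the sums (34) have the form ϑ_i = ∑_{h=0}^{i−1}(θ_h − θ_{d−h})/(θ_0 − θ_d), the same sums that appear in (PA3), (PA4) of [19, Theorem 1.9], and verifies the β-recurrence of {ϑ_i}_{i=0}^{d+1} for a geometric θ-sequence; the values d = 7 and q = 5/2 are arbitrary.

```python
from fractions import Fraction as F

d, q = 7, F(5, 2)
theta = [q ** i for i in range(d + 1)]   # mutually distinct and beta-recurrent
beta = q + 1 / q

# Assumed form of the sums (34)
vartheta = [sum((theta[h] - theta[d - h]) / (theta[0] - theta[d]) for h in range(i))
            for i in range(d + 2)]
assert vartheta[0] == 0 and vartheta[d + 1] == 0
assert vartheta[1] == 1 and vartheta[d] == 1

# The sequence {vartheta_i}_{i=0}^{d+1} is beta-recurrent
for i in range(2, d + 1):
    assert (vartheta[i-2] - (beta + 1) * vartheta[i-1]
            + (beta + 1) * vartheta[i] - vartheta[i+1]) == 0
print("ok")
```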
Proposition 13.4. Assume that {θ i } d i=0 are mutually distinct and β-recurrent. Then for scalars {ϑ i } d+1 i=0 in F the following are equivalent: We show that ∆ i = 0 for 0 ≤ i ≤ d + 1. By construction For the rest of the proof we assume that d ≥ 3; otherwise we are done. By construction and Lemma 13.3, the sequence {∆ i } d+1 i=0 is β-recurrent. We break the argument into cases.
The first three equations in (44) give

For the above equation, consider the determinant of the coefficient matrix. For even d = 2n this determinant is −4n, and for odd d = 2n + 1 this determinant is 4n. Note that 2 ≠ 0 in F since char(F) ≠ 2. For either parity of d we have n ≠ 0 in F by (45). So for either parity of d the determinant is nonzero. Therefore the coefficient matrix is invertible, so each of α_1, α_2, α_3 is zero. Consequently ∆_i = 0 for 0 ≤ i ≤ d + 1.

Case β = 0 and char(F) = 2. We have d = 3 by Lemma 12.2(iv). There exist α_1, α_2, α_3 in F such that

where i(i − 1)/2 is interpreted as at the end of Lemma 12.1. The first three equations in (44) give

In the above equation the coefficient matrix is invertible, so each of α_1, α_2, α_3 is zero.

The polynomial P(x, y)

Let β, γ, ϱ denote scalars in F, and consider the polynomial in two variables

P(x, y) = x² − βxy + y² − γ(x + y) − ϱ.  (46)

Note that P(x, y) = P(y, x). Let {θ_i}_{i=0}^d denote scalars in F.

The tridiagonal relations
In this section, we consider a Leonard system Φ = (A; {E_i}_{i=0}^d; A*; {E*_i}_{i=0}^d) on V. Our goal is to prove the following result.
Lemma 15.3. The following (i)-(iii) hold for 0 ≤ i, j ≤ d: (iii) for 0 ≤ r, s ≤ d,

Proof. Pick a basis {v_i}_{i=0}^d for V with respect to which the matrix representing A* is diagonal and the matrix representing A is irreducible tridiagonal. Without loss of generality, we may identify each X ∈ End(V) with the matrix in Mat_{d+1}(F) that represents X with respect to {v_i}_{i=0}^d. From this point of view, A is irreducible tridiagonal and A* = diag(θ*_0, θ*_1, . . . , θ*_d). Moreover for 0 ≤ i ≤ d, the matrix E*_i is diagonal with (i, i)-entry 1 and all other entries 0. Using these matrix representations, one routinely verifies the assertions (i)-(iii) in the lemma statement.
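A standard consequence of these matrix representations, used repeatedly in arguments of this kind, is that E*_rA^kE*_s = 0 whenever k < |r − s|: the tridiagonal matrix A^k can move an eigenvector of A* at most k 'steps'. We are not claiming this is the exact statement of Lemma 15.3(iii), but it is easy to verify; the irreducible tridiagonal matrix below is an arbitrary example.

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

d = 4
# An irreducible tridiagonal matrix, written with respect to an eigenbasis of A*
A = [[F(0)] * (d + 1) for _ in range(d + 1)]
for i in range(d + 1):
    A[i][i] = F(i)
    if i > 0:
        A[i][i - 1] = F(1)       # subdiagonal: all nonzero
        A[i - 1][i] = F(i + 1)   # superdiagonal: all nonzero

# In this basis E*_i is diagonal with a single 1 in position (i, i)
def Estar(i):
    M = [[F(0)] * (d + 1) for _ in range(d + 1)]
    M[i][i] = F(1)
    return M

Z = [[F(0)] * (d + 1) for _ in range(d + 1)]
Ak = [[F(1) if i == j else F(0) for j in range(d + 1)] for i in range(d + 1)]  # A^0
for k in range(d + 1):
    for r in range(d + 1):
        for s in range(d + 1):
            prod = matmul(matmul(Estar(r), Ak), Estar(s))
            if k < abs(r - s):
                assert prod == Z     # A^k cannot connect E*_rV to E*_sV
            elif k == abs(r - s):
                assert prod != Z     # with equality, the (r, s) entry is nonzero
    Ak = matmul(Ak, A)
print("ok")
```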
Recall that D is the subalgebra of End(V ) generated by A. Then is a basis for the vector space D; (ii) L d = I; where Sum both (48) and (49) over j = 0, 1, . . . , i and take the difference between these two sums.
Lemma 15.5. We have

Proof. Using Lemma 15.4 we obtain

Proof of Theorem 15.1. First assume that d ≥ 3. By Lemma 15.5 (with X = A² and Y = A) there exists Z ∈ D such that (50) holds. We show that k = 3. We first assume that k < 3 and get a contradiction. We multiply each term in (50) on the left by E*_3 and on the right by E*_0, and evaluate the result. We have shown k ≥ 3. Let c denote the coefficient of λ^k in f. By construction c ≠ 0.
Corollary 15.6. (See [19, Lemma 12.7].) For the Leonard system Φ the scalars (θ_{i−2} − θ_{i+1})/(θ_{i−1} − θ_i) and (θ*_{i−2} − θ*_{i+1})/(θ*_{i−1} − θ*_i) are equal and independent of i for 2 ≤ i ≤ d − 1.
Then the entries of (54) are as follows.
Proof. Examine the entries given in Lemma 16.2.
Proposition 16.4. With the notation and assumptions of Lemma 16.3, Proof. By Lemma 16.3.

The proof of Theorem 10.1
In this section we prove Theorem 10.1.
We now reverse the direction. Assume that the scalars (21) satisfy (PA1)-(PA5). We display a Leonard system Φ over F that has parameter array (21). Recall the vector space V with dimension d + 1. Pick a basis {u i } d i=0 for V . Define A, A * ∈ End(V ) with the following matrix representations with respect to {u i } d i=0 : A : is a pre Leonard system on V . We show that Φ is a Leonard system on V . To this end we show the following (56)-(64) for 0 ≤ i, j ≤ d: and Proposition 7.6 implies (56), (58), (62). Lemma 7.7 implies (64). Before proceeding we make some comments. The element E * 0 is normalizing by Proposition 7.6. By (PA5) and Lemma 11.2, there exists β ∈ F such that let ϑ i denote this common value. Note that ϑ 1 = φ 1 = ϑ d . For notational convenience define ϑ 0 = 0 and ϑ d+1 = 0. We show (60). The sequence {ϑ i } d+1 i=0 satisfies Proposition 13.4(i). By Proposition 13.4 the sequence {ϑ i } d+1 i=0 is β-recurrent. Next we apply Proposition 16.4. We mentioned earlier that {θ i } d i=0 is (β, γ, ̺)-recurrent and {θ * i } d i=0 is β-recurrent. By these comments and Proposition 16.4, the scalars β, γ, ̺ satisfy (TD1). For 0 ≤ i, j ≤ d we multiply each term in (TD1) on the left by E i and on the right by E j . This yields where P is from (46). By Proposition 14.2(ii) we obtain P (θ i , θ j ) = 0 if d > i − j > 1. By this and (65) we have We have shown (60). We show (61). We may assume d ≥ 2; otherwise there is nothing to prove. We show We mentioned earlier that ϑ 1 = ϑ d . These comments and Proposition 8.
We show (57).

We have shown (56)-(64), so Φ is a Leonard system on V. By construction Φ has eigenvalue sequence {θ_i}_{i=0}^d, dual eigenvalue sequence {θ*_i}_{i=0}^d, and first split sequence {φ_i}_{i=1}^d. We showed that {ϕ_i}_{i=1}^d gives the first split sequence of Φ⇓ and hence the second split sequence of Φ. By these comments and Definition 9.6, the sequence (21) is the parameter array of Φ. By Proposition 9.8 the Leonard system Φ is unique up to isomorphism. ✷
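For the reader's convenience we recall the general shape of the relation invoked above. This is our gloss, not a restatement of (TD1) from the present text: in the literature on tridiagonal pairs, the first tridiagonal relation for scalars β, γ, ϱ reads (with [X, Y] = XY − YX):

```latex
\bigl[\,A,\;\; A^{2}A^{*} \;-\; \beta\, A A^{*} A \;+\; A^{*} A^{2}
      \;-\; \gamma\,(A A^{*} + A^{*} A) \;-\; \varrho\, A^{*}\,\bigr] \;=\; 0 .
```

Multiplying each term on the left by E_i and on the right by E_j and using E_iA = θ_iE_i, AE_j = θ_jE_j turns the bracket into a polynomial in θ_i, θ_j times E_iA*E_j, which is how a factor of the form P(θ_i, θ_j) arises.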

Characterizations of Leonard systems and parameter arrays
We are done discussing Theorem 10.1. In this section we discuss some related results concerning Leonard systems and parameter arrays.
We comment on notation.

Theorem 18.1. Let there be given scalars in F and a basis for V with respect to which A :

Then the given sequence is the parameter array of Φ.

Proof. Apply Proposition 7.6 and Lemma 7.7 to both Φ and Φ⇓.
Using Theorem 18.1 we can easily recover the following result.
Let there be given a sequence (68) of scalars in F that satisfies (PA1), (PA2). Then the following are equivalent: (i) the sequence (68) satisfies (PA3)-(PA5); (ii) there exists an invertible G ∈ Mat_{d+1}(F) such that both

Proof. (i) ⇒ (ii): Apply Theorem 10.1 and Theorem 18.1.
(ii) ⇒ (i): By construction there is a pre-Leonard system Φ on V. We show that Φ is a Leonard system on V. By linear algebra there exists a basis {v_i}_{i=0}^d of the required type. The matrix representations of A and A* with respect to {v_i}_{i=0}^d are shown in Theorem 18.1(ii). The pre-Leonard system Φ satisfies Theorem 18.1(i) and Theorem 18.1(ii), so by that theorem Φ is a Leonard system on V. Also by that theorem (68) is the parameter array of Φ, so (68) satisfies (PA3)-(PA5).
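To orient the reader, we sketch the bidiagonal shapes involved in such a change of basis. This is the split form familiar from the Leonard-pair literature, under our reading of Theorem 18.1(ii), not a quotation of it: one matrix is lower bidiagonal with the eigenvalues on the diagonal and 1's below, the other upper bidiagonal with the dual eigenvalues on the diagonal and the first split sequence above:

```latex
\begin{pmatrix}
\theta_0 & & & \\
1 & \theta_1 & & \\
  & \ddots & \ddots & \\
  & & 1 & \theta_d
\end{pmatrix},
\qquad
\begin{pmatrix}
\theta^*_0 & \varphi_1 & & \\
 & \theta^*_1 & \ddots & \\
 & & \ddots & \varphi_d \\
 & & & \theta^*_d
\end{pmatrix}.
```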

The intersection numbers
We now bring in the intersection numbers. Throughout this section Φ denotes a Leonard system on V. For 0 ≠ ξ ∈ E_0V the vectors {E*_iξ}_{i=0}^d form a basis for V, said to be Φ-standard. With respect to this basis A is represented by

\begin{pmatrix}
a_0 & b_0 & & & \\
c_1 & a_1 & b_1 & & \\
 & c_2 & \ddots & \ddots & \\
 & & \ddots & \ddots & b_{d-1} \\
 & & & c_d & a_d
\end{pmatrix},

where c_0 = 0 and b_d = 0. We call the scalars c_i, a_i, b_i the intersection numbers of Φ. They are discussed in [22, Section 11]. The intersection numbers of Φ* are written c*_i, a*_i, b*_i, where c*_0 = 0 and b*_d = 0. With respect to a Φ*-standard basis for V, A* is represented by the corresponding tridiagonal matrix in the starred scalars. We mention a handy recurrence.
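As a concrete illustration (our example, not treated at this point in the text), consider the Leonard system attached to the d-dimensional hypercube: its intersection numbers are c_i = i, a_i = 0, b_i = d − i, and its eigenvalues are θ_i = d − 2i. The following sketch checks that each θ_i is an eigenvalue of the tridiagonal matrix above, using the three-term recurrence for the leading principal minors of λI − A:

```python
# Sketch (not from the paper): for the d-cube, a standard example of a
# Q-polynomial distance-regular graph, the intersection numbers are
# c_i = i, a_i = 0, b_i = d - i, and the eigenvalues are theta_i = d - 2i.

def char_poly_value(diag, sup, sub, lam):
    """Evaluate det(lam*I - T) for a tridiagonal matrix T with the given
    diagonal, superdiagonal, and subdiagonal, via the standard recurrence
    p_k = (lam - diag[k]) p_{k-1} - sup[k-1] sub[k-1] p_{k-2}."""
    p_prev, p = 1, 1  # p_{-2} placeholder and p_{-1} = 1
    for k in range(len(diag)):
        p_prev, p = p, (lam - diag[k]) * p - (sup[k - 1] * sub[k - 1] * p_prev if k > 0 else 0)
    return p

d = 6
a = [0] * (d + 1)                 # a_i = 0 for the d-cube
b = [d - i for i in range(d)]     # b_0, ..., b_{d-1}
c = [i + 1 for i in range(d)]     # c_1, ..., c_d
thetas = [d - 2 * i for i in range(d + 1)]

# Each theta_i is a root of the characteristic polynomial, exactly (integer arithmetic).
assert all(char_poly_value(a, b, c, t) == 0 for t in thetas)
print("all theta_i = d - 2i are eigenvalues of the intersection matrix")
```

The same recurrence applies to any Leonard system, since only the diagonal entries and the products b_{i−1}c_i enter the characteristic polynomial.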
Proof. The proof of (72) is similar to the proof of (69). To obtain (72), use the fact that for 0 ≠ ξ ∈ E_0V the vector (A* − a*_0 I)ξ is an eigenvector for A with eigenvalue θ_1, and

To get (73), apply (72) to Φ*.
To obtain the dual versions, exchange starred and nonstarred symbols everywhere in the above equations.
The matrix B* on the right in (71) represents A* with respect to {E_iξ}_{i=0}^d. The matrix D* on the right in (12) represents A* with respect to {τ_i(A)ξ}_{i=0}^d. By linear algebra B*T = TD*. Comparing the entries of this matrix equation gives (76). To obtain (77), in the above argument replace the basis {τ_i(A)ξ}_{i=0}^d by the basis {η_i(A)ξ}_{i=0}^d, and replace the matrix on the right in (12) by the matrix on the right in (20). To obtain (74),

In Lemma 19.4 we see some fractions. In order to simplify these fractions, we consider the products

Note that ψ_1 = 1. Using Definition 5.1 we find that for 1 ≤ i ≤ d − 1,
We now describe the scalars {ψ_i}_{i=1}^{d−1} in detail. To avoid trivialities we assume that d ≥ 3. Let β + 1 denote the common value of (22).
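To illustrate the common value (our example, assuming (22) denotes the familiar ratios (θ_{i−2} − θ_{i+1})/(θ_{i−1} − θ_i)): for an eigenvalue sequence of q-geometric type θ_i = q^i one has β = q + q^{-1}, and the ratios indeed share the common value β + 1:

```python
from fractions import Fraction

# Illustration (not from the paper): for theta_i = q^i the ratio
# (theta_{i-2} - theta_{i+1}) / (theta_{i-1} - theta_i) is independent of i,
# with common value beta + 1 where beta = q + 1/q.

q = Fraction(3, 2)   # hypothetical choice of q (any q not in {0, 1, -1} works)
d = 8
theta = [q ** i for i in range(d + 1)]
beta = q + 1 / q

ratios = {(theta[i - 2] - theta[i + 1]) / (theta[i - 1] - theta[i])
          for i in range(2, d)}
assert ratios == {beta + 1}
print("common value beta + 1 =", beta + 1)
```

Exact rational arithmetic (via Fraction) makes the independence of i an exact identity rather than a floating-point coincidence.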
Proof. Evaluate the formulas in Lemma 19.4 using (79) and Corollary 19.7.
We mention some attractive formulas involving the intersection numbers; similar formulas apply to the dual intersection numbers.
Next we obtain {θ_i}_{i=0}^d in terms of the intersection numbers and dual eigenvalues. We give two versions.

Proof. To find θ_i for 0 ≤ i ≤ d − 1, solve (80) for θ_i and use induction on i. The formula for θ_d comes from (82).
Next we obtain some results about duality.
Lemma 19.13. For 0 ≤ i, j, r, s ≤ d such that i + j = r + s and r ≠ s,

Proof. Use the data in Lemma 12.3.
Proof. Compare the formulas for b_i, b*_i in Proposition 19.8.
Proposition 19.20. For d ≥ 2,

To get {b*_i}_{i=0}^{d−1} and {c*_i}_{i=1}^d, exchange the symbols θ, θ* everywhere in the above equations.

Proof. To get b_0, set i = 0 in Lemma 19.4(i). To get c_d, set i = d in Lemma 19.4(ii) and eliminate φ_d using (PA4). We now compute b_i, c_i for 1 ≤ i ≤ d − 1. Let i be given. For the equations (74) at j = 1 and j = 2, eliminate a_i using (69) to obtain a system of two linear equations in the unknowns b_i, c_i. Using linear algebra we routinely solve this system for b_i, c_i, and in the solution eliminate ϕ_2 using (85). We have obtained b_i, c_i for 1 ≤ i ≤ d − 1. To get {b*_i}_{i=0}^{d−1} and {c*_i}_{i=1}^d, apply the above arguments to Φ*.

Our next goal is to give a variation on Proposition 19.20 that involves a_0, a*_0 and c_1, c*_1.

Lemma 19.21. For d ≥ 2, ϕ_2 is equal to each of

(c_1 − a_0 + θ_1)(θ*_2 − θ*_0),    (c*_1 − a*_0 + θ*_1)(θ_2 − θ_0).
Proof. Set i = 1 in (69), Lemma 19.4(i), (80). The resulting equations show that ϕ_2 is equal to the expression on the left in (86). Note that ϕ_2 is unchanged if we replace Φ by Φ*. Therefore ϕ_2 is equal to the expression on the right in (86).
We clarify how c_1, a_0 are related and how c*_1, a*_0 are related.

Lemma 19.22. For d ≥ 2 we have
Proof. To verify the first equation, express the left-hand side in terms of ϕ_2 using Lemma 19.21, and in the resulting equation express a_0 in terms of ϕ_1 using Lemma 8.
To get {b*_i}_{i=0}^{d−1} and {c*_i}_{i=1}^d, exchange starred and nonstarred symbols everywhere in the above equations.
Proof. To get b_0, set i = 0 in Lemma 19.4(i) and evaluate the result using ϕ_1 = (θ*_0 − a*_0)(θ_1 − θ_0). To get c_d, set i = d in Lemma 19.4(ii) and evaluate the result using φ_d = (θ*_d − a*_0)(θ_1 − θ_0). To obtain b_i, c_i for 1 ≤ i ≤ d − 1, we proceed as in the proof of Proposition 19.20. Let i be given. For the equations (74) at j = 1 and j = 2, eliminate a_i using (69) to obtain a system of two linear equations in the unknowns b_i, c_i. Using linear algebra we routinely solve this system for b_i, c_i, and in the solution eliminate ϕ_1, ϕ_2 using ϕ_1 = (θ*_0 − a*_0)(θ_1 − θ_0) and the expression for ϕ_2 from Lemma 19.21. To get {b*_i}_{i=0}^{d−1} and {c*_i}_{i=1}^d, apply the above arguments to Φ*.

Note 19.24. Lemma 19.22 and Proposition 19.23 effectively give the intersection numbers (resp. dual intersection numbers) in terms of a*_0 (resp. a_0) and the eigenvalues and dual eigenvalues. The scalars a_0 and a*_0 are related by the first equation in Lemma 19.18.

Note 19.25. There is a Leonard system attached to the Bose-Mesner algebra of a Q-polynomial distance-regular graph; see [2, p. 260], [12]. For this Leonard system a_0 = 0, a*_0 = 0 and c_1 = 1, c*_1 = 1.

Appendix: Parameter arrays and intersection numbers
The parameter arrays are listed in [23, Section 5]. In this appendix we go through the list, and for each parameter array we give the corresponding intersection numbers and dual intersection numbers.
To get {b*_i}_{i=0}^2 and {c*_i}_{i=1}^3, in the above formulas exchange h ↔ h*, s ↔ s* and preserve r.