Lagrange-multiplier regularization of the eigenproblem for J_x

We solve the eigenproblem of the angular-momentum operator J_x by dealing directly with its non-diagonal matrix, unlike the conventional approach that rotates the trivial eigenstates of J_z. The characteristic matrix is reduced to a tri-diagonal form by the Narducci-Orszag rescaling of the eigenvectors. A systematic reduction formalism with recurrence relations for determinants of any dimension greatly simplifies the computation for tri-diagonal matrices. The secular determinant is thereby obtained in an intrinsically factorized form, from which the eigenvalues follow immediately. The same reduction formalism is employed to find the adjugate of the characteristic matrix. Improving the recently introduced Lagrange-multiplier regularization, we show that every column of that adjugate matrix is indeed an eigenvector. Remarkably, the approach presented in this work is completely new and unique in that no information about J_z is required and only algebraic operations are involved. The collapse of a large amount of determinant computation into a recurrence relation has a wide variety of applications to other tri-diagonal matrices appearing in various fields.
This new formalism should be pedagogically useful for treating the angular-momentum problem that is central to the quantum mechanics course.


Introduction
Angular momentum is one of the central themes of quantum mechanics (QM). Its properties differ drastically from those in classical mechanics, including the spin degrees of freedom, which have no classical counterpart. Furthermore, its values are quantized as integer or half-integer multiples of the Planck constant ℏ, which cannot be expected in classical mechanics either. Because of this distinct nature of angular momentum in QM, most textbooks on QM devote a large part of their volume to problems related to angular momentum.
Angular momentum in QM is described by operators that satisfy the following commutation relations:

(1) [J_x, J_y] = iℏJ_z, [J_y, J_z] = iℏJ_x, [J_z, J_x] = iℏJ_y.

The conventional approach to the problem starts with the observation that J² ≡ J_x² + J_y² + J_z² is a Casimir operator that commutes with all the angular-momentum operators J_x, J_y, and J_z. Then one can construct the simultaneous eigenstate of J² and J_z, where J_z is chosen by convention. With some algebra, it can be shown that the square of the eigenvalue of J_z is bounded from above, the bound being set by the angular-momentum quantum number j that determines the eigenvalue j(j + 1)ℏ² of J². Ladder operators, defined as J_± = J_x ± iJ_y, change an eigenstate of J_z into states with eigenvalues higher (or lower) by ℏ, keeping the eigenvalue of J² unchanged. As a result, the eigenvalues of J_z, which are the angular-momentum components along the z axis, are quantized as integer or half-integer multiples of ℏ. In the basis constructed from the eigenstates of J_z, the matrix representation of the operator J_z is diagonal and the eigenvectors are simple unit column vectors, all of whose components are zero except for a single non-zero component equal to unity. From this, the properties of J_x (or J_y) can be deduced as follows: since it is related to J_z by a unitary transformation, it shares the same eigenspectrum. The eigenstates can be calculated by rotating the trivial eigenstates of J_z with the unitary operator U = exp[−iπJ_y/(2ℏ)]. (See Ref. [1], for example.) Although the conventional method is quite neat and simple, it is still worth pursuing wholly new approaches, both pedagogically and practically. First, the conventional approach requires knowledge of some advanced subjects such as Lie algebra or elementary group theory. Furthermore, the tricks for the manipulation of the matrices that appear in angular-momentum theory will have a wide variety of applications.
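These relations are easy to verify numerically. The following sketch (an illustration added here, not part of the original derivation) builds the standard angular-momentum matrices for a given j in the J_z eigenbasis and checks Eq. (1) with ℏ = 1; the function name `angular_momentum` is our own choice:

```python
import numpy as np

def angular_momentum(j):
    """Return (Jx, Jy, Jz) for quantum number j in the |j, m> basis (hbar = 1).

    Basis ordering: m = j, j-1, ..., -j.
    """
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)                   # diagonal of Jz
    Jz = np.diag(m).astype(complex)
    # <m+1| J_+ |m> = sqrt(j(j+1) - m(m+1)), placed on the superdiagonal
    c = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(c, k=1).astype(complex)     # raising operator J_+
    Jm = Jp.conj().T                         # lowering operator J_-
    Jx = (Jp + Jm) / 2
    Jy = (Jp - Jm) / (2 * 1j)
    return Jx, Jy, Jz

for j in (0.5, 1, 1.5, 2):
    Jx, Jy, Jz = angular_momentum(j)
    comm = Jx @ Jy - Jy @ Jx
    assert np.allclose(comm, 1j * Jz)        # [Jx, Jy] = i Jz
```

The same construction supplies the matrix representation of J_x that is the subject of the rest of the paper.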
In addition, a new setup that is completely distinct from the conventional one is valuable in its own right. From this perspective, Narducci and Orszag [2] suggested a distinct approach to the eigenvalue problem of J_x. Instead of resorting to the result of the eigenvalue problem of J_z, they directly derived the eigenvectors by solving the difference equation that governs the relationship among the components of the eigenvectors. By rescaling the components of the eigenvector, they transformed the recurrence relation into a convenient form with integer coefficients and constructed the generating function with ease. From the analyticity condition of the generating function, the quantized spectrum of the eigenvalues was deduced all at once. As a result, the components of the eigenvectors were derived in a compact form with Jacobi polynomials. The Lagrange-multiplier-regularization formalism of an indeterminate equation was first developed in Ref. [3] to solve a simple system of indeterminate linear equations. The formalism was generalized to the eigenproblem in Ref. [4] and further developed to include the adjugate representation in Ref. [5]. Conventionally, solving an eigenproblem consists of finding the eigenvalues from the secular (characteristic) equation and determining the eigenvectors with Gaussian elimination. The secular equation, the condition of vanishing determinant of the characteristic matrix, renders the eigenvalue equation indeterminate. This situation can be changed by introducing matrix-valued Lagrange multipliers that make the eigenvalue equation deterministic. In that sense, we regularize the eigenvalue equation. Before applying the constraint equation, we can find the eigenvector that depends on the fictitious parameter, and the parameter-independent part will be the eigenvector of the original eigenvalue equation. Actually, we can construct the projection operator that projects an arbitrary column vector onto the eigenvector.
In any non-degenerate case, this projection operator is nothing but the adjugate of the characteristic matrix [5].
In this paper, we provide a completely new setup for the eigenproblem of J_x. Following the work of Narducci and Orszag, we adopt the rescaling of the eigenvector and transform the matrix of the operator J_x to a tri-diagonal form with integer entries. With the elementary operation of matrices introduced by Mazza [6], we find that the determinant of the characteristic matrix satisfies a simple recurrence relation, which automatically generates the factorized formula for the secular determinant. From that factorized formula, the eigenvalues are easily determined without relying on cumbersome expansions of the secular determinant and root finding. In addition, by adopting the Lagrange undetermined-multiplier approach suggested in Refs. [4,5], we derive the eigenvectors just by reading off the columns of the adjugate matrices. Quite remarkably, even the adjugate matrix itself can be calculated by the recurrence relations and cast into a simple factorized form. In the whole process, all we need are algebraic operations rather than cumbersome matrix manipulations. In that sense, this approach has substantial pedagogical value. Furthermore, it could also be applied to more general classes of problems involving various tri-diagonal matrices, which appear frequently in diverse areas of science, including physics, mathematics, biology, computer science, and so forth. This paper is organized as follows: In Sect. 2, we review the Narducci-Orszag method for the eigenproblem of J_x. In Sect. 3, we present an alternative approach employing matrix-valued Lagrange undetermined multipliers to solve the eigenproblem of J_x. Conclusions are given in Sect. 4, and rigorous proofs of useful identities as well as some explicit matrix representations of the eigenvectors of J_x are given in the Appendices.

Approach by Narducci and Orszag
In this section, we review the generating-function approach introduced by Narducci and Orszag in Ref. [2], from which the eigenvalues and eigenvectors of J_x are derived. In the remainder of this paper, we use the natural unit system in which ℏ is set to unity. Thus one has to multiply each angular-momentum operator or matrix and the corresponding eigenvalues by ℏ to restore the physical dimensions.
The matrix element of the operator J_x is

(2) (J_x)_{μ′μ} = (1/2)[√((j − μ)(j + μ + 1)) δ_{μ′,μ+1} + √((j + μ)(j − μ + 1)) δ_{μ′,μ−1}],

where μ′, μ = −j, −j + 1, ⋯, j and j is a non-negative integer or half-integer. It is a tri-diagonal matrix with vanishing diagonal elements: the matrix elements vanish unless |μ′ − μ| = 1. Because J_x is Hermitian, its eigenvalues m are real and its eigenvectors are orthogonal. The eigenstate with x-component angular momentum m satisfies the following eigenvalue equation:

(3) J_x ψ^(m) = m ψ^(m),

where J_x is the matrix representation of J_x with matrix elements given in Eq. (2) and ψ^(m) is the column eigenvector with 2j + 1 rows that corresponds to the eigenvalue m:

(4) ψ^(m) = (ψ^(m)_{−j}, ψ^(m)_{−j+1}, ⋯, ψ^(m)_{j})^T.

In ordinary QM textbooks, the eigenvectors are obtained from a trivial column eigenvector χ^(m) of J_z, the matrix representation of J_z, whose μth element is 1 for μ = m and 0 otherwise. Then ψ^(m) = U χ^(m), where U is the unitary matrix that transforms J_z into J_x = U J_z U†. The transformation is the rotation about the y axis by an angle of π/2. Hence, one should know how to rotate a vector and a matrix and how to diagonalize a Hermitian matrix to carry out this computation.
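As a quick numerical illustration of this textbook route (our own sketch; the j = 1 matrices are the standard ones), one can build U = exp[−iπJ_y/2] from the spectral decomposition of J_y and check that it rotates J_z and its trivial eigenvectors into J_x and its eigenvectors:

```python
import numpy as np

# Standard j = 1 matrices (hbar = 1) in the basis m = 1, 0, -1.
s = 1 / np.sqrt(2)
Jx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Jy = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# U = exp(-i (pi/2) Jy), built from the eigendecomposition of Jy.
w, V = np.linalg.eigh(Jy)
U = V @ np.diag(np.exp(-1j * (np.pi / 2) * w)) @ V.conj().T

assert np.allclose(U @ Jz @ U.conj().T, Jx)      # J_x = U J_z U^dagger

chi = np.array([1.0, 0.0, 0.0], dtype=complex)   # trivial J_z eigenvector, m = 1
psi = U @ chi
assert np.allclose(Jx @ psi, psi)                # eigenvector of J_x with m = 1
```

The rest of the paper obtains the same eigenvectors without any rotation or diagonalization.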
Narducci and Orszag did not employ the diagonalization procedure but found the eigenvector by solving the recurrence relation for the components of the eigenvector directly in Ref. [2]. The components of ψ^(m) in Eq. (4) satisfy the following recurrence relation:

(5) (1/2)√((j − μ)(j + μ + 1)) ψ^(m)_{μ+1} + (1/2)√((j + μ)(j − μ + 1)) ψ^(m)_{μ−1} = m ψ^(m)_μ.

The authors of Ref. [2] employed new indices p and q ranging from 0 to 2j as

(6) p = j + m, q = j + μ,

in terms of which the recurrence relation reads

(7) (1/2)√((q + 1)(2j − q)) ψ^(p)_{q+1} + (1/2)√(q(2j − q + 1)) ψ^(p)_{q−1} = (p − j) ψ^(p)_q.

Because the eigenvector has only 2j + 1 components, one can impose the following boundary conditions that eliminate unphysical components:

(8) ψ^(p)_{−1} = ψ^(p)_{2j+1} = 0.

One can rationalize the coefficients of the recurrence relation by rescaling the elements ψ^(p)_q as

(9) φ^(p)_q = √C(2j, q) ψ^(p)_q, C(2j, q) ≡ (2j)!/[q!(2j − q)!].

All of the coefficients of the resultant recurrence relation for φ^(p)_q are integers. Hence, the following recurrence relation is more convenient to deal with than those in Eqs. (5) and (7) [2]:

(10) (q + 1) φ^(p)_{q+1} + (2j + 1 − q) φ^(p)_{q−1} = 2(p − j) φ^(p)_q.

The convenience is traded off against the Hermitian nature of the transformation matrix. Thus the new eigenvectors are no longer orthogonal unless one inserts a weight matrix to define the scalar product. In general, the orthogonality of the eigenvectors requires that

(11) Σ_{q=0}^{2j} φ^(p)_q φ^(p′)_q / C(2j, q) = δ_{pp′}.

The recurrence relation in Eq. (10) can be easily solved if we introduce the generating function, a power series in a complex variable z whose coefficients are the components of the eigenvector:

(12) f_p(z) = Σ_{q=0}^{2j} φ^(p)_q z^q,

where the range of z is chosen so that the series on the right side is convergent. Because the right side is a polynomial in z of degree 2j, which is finite, the generating function is differentiable to all orders in the whole complex plane. Thus, f_p(z) is an entire function: analytic for any z.
As shown in Ref. [2], we multiply z^q to the recurrence relation in Eq. (10) and sum over q from 0 through 2j. Then we can identify the summations with the generating function or its derivative multiplied by a constant or a simple power of z. After this rearrangement, we find the differential equation for the generating function f_p(z):

(13) (1 − z²) f_p′(z) = 2(m − jz) f_p(z), m ≡ p − j.

Reorganizing the z dependence with partial fractions, we can transform the first-order differential equation into a linear combination of df_p(z)/f_p(z), dz/(1 − z), and dz/(1 + z). The general solution can then be found as

(14) f_p(z) = N_p (1 − z)^{j−m} (1 + z)^{j+m},

where N_p is a constant, which is determined by the normalization condition in Eq. (11).
As was stated earlier, f_p(z) must be analytic everywhere because it is a polynomial in z of finite degree 2j. This strong requirement allows us to determine the eigenvalues m straightforwardly because m is a stringent parameter that determines the analyticity of the generating function. The analyticity requirement disallows non-integral or negative powers j ± m in Eq. (14). This guarantees that f_p(z) is free of singularities: there are no poles and no branch points or cuts in the entire z plane for the function f_p(z) in Eq. (14). The resultant constraints on m are

(15) j − m ≥ 0 and j + m ≥ 0, with j − m and j + m integers.

Note that the inequalities are deduced from the no-pole condition, and the integer conditions are assigned to remove the branch points at z = ±1. Considering the conditions, we can conclude that m must be an integer or half-integer satisfying

(16) m = −j, −j + 1, ⋯, j.

In general, all of the allowed eigenvalues in Eq. (16) can be parametrized as in Ref. [2]:

(17) m = p − j, p = 0, 1, ⋯, 2j.

According to the definition of the generating function in Eq. (12), we can read off the components of the eigenvector by differentiating the generating function and then substituting z = 0:

(18) φ^(p)_q = (1/q!) [d^q f_p(z)/dz^q]_{z=0}.

Substituting f_p(z) with its explicit form provided in Eq. (14) into Eq. (18), we obtain the result in Eq. (19), expressed through the Jacobi polynomial P^(α,β)_n(z) defined in Eq. (20) [7]. Note that we have also made use of the explicit form of the normalization factor N_p, which is proven in the Appendix of Ref. [2]. By making use of the relation in Eq. (9) and recovering the original indices with Eq. (6), we can finally reproduce the components of the eigenvector as in Ref. [2].
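The recurrence route can also be checked numerically. The sketch below is our own illustration: it assumes the standard matrix elements of J_x and a binomial rescaling φ_q = √C(2j, q) ψ_q, so conventions may differ from Ref. [2] by overall factors. It iterates the integer-coefficient recurrence forward from φ_0 = 1 and confirms that, after undoing the rescaling, the result is an eigenvector of J_x:

```python
import numpy as np
from math import comb

def jx_eigvec(j, m):
    """Eigenvector of J_x (hbar = 1) for eigenvalue m via the rescaled
    three-term recurrence (q+1) phi_{q+1} = 2 m phi_q - (2j+1-q) phi_{q-1}."""
    n = int(round(2 * j))
    phi = np.zeros(n + 1)
    phi[0] = 1.0
    for q in range(n):
        prev = phi[q - 1] if q > 0 else 0.0
        phi[q + 1] = (2 * m * phi[q] - (n + 1 - q) * prev) / (q + 1)
    # Undo the binomial rescaling: psi_q = phi_q / sqrt(C(2j, q))
    psi = phi / np.sqrt([comb(n, q) for q in range(n + 1)])
    return psi / np.linalg.norm(psi)

def jx_matrix(j):
    """Tri-diagonal J_x in the basis mu = -j, ..., j."""
    mu = np.arange(-j, j)
    c = 0.5 * np.sqrt((j - mu) * (j + mu + 1))
    return np.diag(c, 1) + np.diag(c, -1)

j = 1.5
Jx = jx_matrix(j)
for m in (-1.5, -0.5, 0.5, 1.5):
    v = jx_eigvec(j, m)
    assert np.allclose(Jx @ v, m * v)
```

For j = 3/2 and m = 3/2, for instance, the recursion yields φ = (1, 3, 3, 1), which unscales to the familiar eigenvector proportional to (1, √3, √3, 1).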

Undetermined-multiplier regularization procedure
Let us consider an eigenproblem as follows:

(23) (A − λ1) ψ = 0,

where A and the identity 1 are n × n matrices and the right side is the null vector. If we do not use the transformation technique to diagonalize the matrix A and do not solve the recurrence relation as shown in Sect. 2, then we are left with the elementary method called Gaussian elimination after determining the eigenvalue by solving the secular equation det[A − λ1] = 0. Because the determinant is a polynomial in λ of degree n, there are n eigenvalues. Once an eigenvalue is substituted into Eq. (23), ψ is orthogonal to every row of A − λ1. Thus the rank of the matrix A − λ1 is less than n, which is consistent with the constraint det[A − λ1] = 0. Hence, one cannot solve the equation by multiplying by (A − λ1)^{−1}, which does not exist. The essential reason for the vanishing determinant is that the number of components of the eigenvector is greater than the number of linearly independent simultaneous linear equations. This kind of equation is called an indeterminate linear equation.
In this section, we adopt an alternative approach developed in Ref. [4] in which Lagrange undetermined multipliers are introduced to regularize the characteristic equation in Eq. (23). This regularization introduces an additional parameter ε, satisfying the constraint equation ε = 0, that increases the number of linearly independent constraints. In order to compensate for the lacking degree of freedom in the matrix A − λ1, we can add any matrix multiplied by the parameter ε to A − λ1 as long as the determinant becomes non-zero. The simplest choice is to add ε1. If we do not modify the right side, then we obtain the trivial solution ψ = 0, which is not an eigenvector. Thus we add an arbitrary vector c multiplied by ε to the right side. The resultant regularized form is

(25) (A − λ1 + ε1) ψ_ε = ε c,

where c is an arbitrary non-zero complex column vector. We are ready to find ψ_ε. When we employ Lagrange undetermined multipliers in Lagrangian mechanics, we use the least-action principle to derive the Euler-Lagrange equation without imposing the constraint equation, because we need to compensate for the lacking degrees of freedom by the Lagrange multipliers. Here, it is not ε itself but ε1 and εc that compensate for the lacking degree of freedom. So we must not impose the constraint equation ε = 0 before we solve the linear equation. Because of this regularization procedure, det[A − λ1 + ε1] ≠ 0 and the inverse matrix (A − λ1 + ε1)^{−1} indeed exists. We first multiply the inverse (A − λ1 + ε1)^{−1} to Eq. (25) to compute ψ_ε:

(26) ψ_ε = ε (A − λ1 + ε1)^{−1} c.

Because ψ = lim_{ε→0} ψ_ε is the solution to the original equation in Eq. (23), we isolate the ε-independent contribution ψ from ψ_ε by introducing Δ as

(28) ψ_ε = ψ + ε Δ.

Here, Δ must be of order ε^0 or higher and contains all of the ε dependence in ψ_ε. According to Eq. (28), ψ must be expressed as

(29) ψ = ε (A − λ1 + ε1)^{−1} c − ε Δ.

We observe from this identity that the left side is independent of ε and, therefore, the ε-dependent contributions on the right side must cancel exactly. Thus, the right side is completely independent of ε to all orders. This complete decoupling of the ε dependence is analogous to what happens with gauge invariance when we use the gauge-fixing term in the Lagrangian density of the gauge field. Because only the longitudinal direction parallel to ψ survives on both sides, the transverse direction that is perpendicular to ψ on the right side must cancel. Manifestly, the expression in Eq. (29) is allowed for any value of ε. Let us choose the simplest value ε = 0 for convenience:

(30) ψ = [ε (A − λ1 + ε1)^{−1} c]_{ε→0},

where we have discarded the term εΔ that vanishes at ε = 0. Note that one must not discard that term if ε ≠ 0, because it has the crucial role of cancelling the ε dependence in the former term exactly. The expression in Eq. (30) is a well-defined function at ε = 0, like sin x/x, which has the limiting value 1 as x → 0 that is identical to the value of the function at exactly x = 0. This explains why (A − λ1 + ε1)^{−1} is not defined as ε → 0 while ε (A − λ1 + ε1)^{−1} c is a well-defined vector at exactly ε = 0.
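A short numerical sketch (ours, for illustration) makes the regularization concrete for J_x with j = 1: solving the regularized equation and letting ε shrink reproduces an eigenvector, with no Gaussian elimination involved:

```python
import numpy as np

# J_x for j = 1 (hbar = 1); its eigenvalues are -1, 0, 1.
s = 1 / np.sqrt(2)
A = np.array([[0, s, 0], [s, 0, s], [0, s, 0]])

lam = 1.0                            # pick an eigenvalue
c = np.array([1.0, 2.0, 3.0])        # arbitrary non-zero vector
I = np.eye(3)

# Regularized equation: (A - lam*I + eps*I) psi_eps = eps * c
for eps in (1e-4, 1e-8):
    psi = eps * np.linalg.solve(A - lam * I + eps * I, c)
    # psi approaches an eigenvector as eps -> 0
    assert np.allclose(A @ psi, lam * psi, atol=1e-3)
```

The ε in front of the inverse cancels the 1/ε blow-up of the inverse along the eigenvector direction, which is why the product stays finite.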
We have an alternative way to compute the expression in Eq. (30), because the function is continuous at ε = 0 as explained above:

(31) ψ = ε (A − λ1 + ε1)^{−1} c,

valid for an arbitrary value of ε, in which there is no ε dependence at all. In the case of classical mechanics, the Lagrange undetermined multipliers have physical implications because they determine the constraint forces that allow the system to satisfy the constraint equations. Our Lagrange undetermined multipliers ε1 and εc indeed have an essential role in that the remnants of the multipliers are smeared inside the final result for ψ. Were it not for these multipliers, the longitudinal direction along ψ could not be allocated to the answer, which would lead to the trivial solution.

In the case of ε1, it remains in the answer after the cancellation with the front factor ε, because the contribution of the inverse emerges as 1/ε through the determinant. In the case of εc, we would obtain the trivial solution if we happened to take as c a vector orthogonal to the eigenvector.
In order to investigate the analytic structure of the matrix (A − λ1 + ε1)^{−1} rigorously, especially in the limit ε → 0, we introduce the ε-dependent determinant D_n(λ, ε):

(32) D_n(λ, ε) ≡ det[A − λ1 + ε1],

where the subscript n of D_n(λ, ε) indicates that the matrix A − λ1 + ε1 is an n × n square matrix. It is apparent that D_n(λ, ε = 0) = D_n(λ) ≡ det[A − λ1], which vanishes for an eigenvalue λ. In many cases, both D_n(λ) and D_n(λ, ε) can be derived easily just from the recursion relation among the determinants of the matrices of various dimensions. When there are no degeneracies in the spectrum of the eigenvalues, D_n(λ, ε)/ε at a given eigenvalue λ = λ_i must be factorized as

(33) D_n(λ_i, ε)/ε = ∏_{k≠i} (λ_k − λ_i + ε).

Because every eigenvalue is distinct, we obtain a finite value for the product on the right side as ε → 0. The generalization of the method including degeneracies is presented in Ref. [4].
According to Eq. (31), the eigenvector corresponding to a given eigenvalue λ_i can be factorized as the product of two 'finite' quantities as follows:

(34) ψ_i = [ε adj(A − λ_i 1 + ε1) c / D_n(λ_i, ε)]_{ε→0} = adj[A − λ_i 1] c / ∏_{k≠i} (λ_k − λ_i).

Since c is arbitrary, we can rescale c as c/∏_{k≠i}(λ_k − λ_i) → c. Then the eigenvector can be written in a compact form as follows:

(35) ψ_i = adj[A − λ_i 1] c,

up to normalization, which is essentially identical to Eq. (31). The matrix multiplied to an arbitrary column vector c in Eq. (35) is, up to a multiplicative normalization factor, the projection operator that projects the arbitrary vector onto the eigenvector for the eigenvalue λ_i. It is nothing but the adjugate of the matrix A − λ_i 1:

(36) lim_{ε→0} ε (A − λ_i 1 + ε1)^{−1} = adj[A − λ_i 1] / ∏_{k≠i} (λ_k − λ_i).

This identity holds for any eigenproblem without degeneracy. It is remarkable that the analyticity of Eq. (31) is manifestly reconfirmed by the representation with the adjugate matrix, which is always defined regardless of the value of the determinant of A − λ1.
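This adjugate representation can be verified directly in a few lines (our own sketch; the cofactor-based `adjugate` helper is our construction, not from the paper):

```python
import numpy as np

def adjugate(M):
    """Adjugate via cofactors: adj(M)[i, j] = (-1)^(i+j) det(M minus row j, col i)."""
    n = M.shape[0]
    adj = np.zeros_like(M, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

s = 1 / np.sqrt(2)
A = np.array([[0, s, 0], [s, 0, s], [0, s, 0]])   # J_x for j = 1

for lam in (-1.0, 0.0, 1.0):                      # the eigenvalues of J_x
    B = adjugate(A - lam * np.eye(3))
    # (A - lam*I) adj(A - lam*I) = det(A - lam*I) * I = 0: columns live in the kernel
    assert np.allclose((A - lam * np.eye(3)) @ B, 0.0, atol=1e-12)
    col = B[:, np.argmax(np.linalg.norm(B, axis=0))]  # pick a non-vanishing column
    assert np.allclose(A @ col, lam * col)
```

Since the adjugate has rank one at a simple eigenvalue, every non-vanishing column is proportional to the same eigenvector.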
The formula in Eq. (36) holds not only for a Hermitian matrix H, which appears most frequently in physics, but also for any matrix A that is related to H by a similarity transformation H = PAP^{−1} with det[P] ≠ 0. Then P^{−1}ψ is an eigenvector of A with the common eigenvalue λ if ψ is an eigenvector of H with the eigenvalue λ. Unlike in the Hermitian case, these vectors do not form orthogonal bases. Consequently, the adjugate matrix, which is the projection operator for a given eigenvalue, is not Hermitian in general. This is manifest in the adjugate matrices presented in Sect. 3.2.

Application to the eigenproblem of J_x
We apply the regularization procedure employing Lagrange undetermined multipliers described in Sect. 3.1 to solve the eigenproblem of J_x with the angular-momentum quantum number j. We use a superscript [j] to indicate the angular-momentum quantum number; it makes formulas less busy and reduces confusion in using 2j + 1 when we shift indices.
We begin with the eigenvalue problem in Eq. (10), which represents

(37) A^{[j]} φ^(p) = 2(p − j) φ^(p),

where A^{[j]} is a (2j + 1) × (2j + 1) matrix with a non-negative integer or half-integer j whose only non-vanishing elements are

(38) (A^{[j]})_{q,q+1} = q + 1, (A^{[j]})_{q,q−1} = 2j + 1 − q, q = 0, 1, ⋯, 2j.

We observe that there exists a similarity transformation

(39) A^{[j]} = 2 P J_x P^{−1},

where J_x is the tri-diagonal Hermitian matrix with matrix elements given in Eq. (2),

(40) (J_x)_{q,q+1} = (J_x)_{q+1,q} = (1/2)√((q + 1)(2j − q)),

and P is the diagonal matrix of the rescaling factors in Eq. (9),

(41) P = diag(√C(2j, 0), √C(2j, 1), ⋯, √C(2j, 2j)).

To investigate the eigenvalues of the matrix, we consider its determinant. We define D^{[j]} by

(42) D^{[j]} ≡ det[J_x − λ1].

In order to reduce messy expressions involving half-integers, we define a corresponding (n + 1) × (n + 1) square matrix S^{(n)} whose off-diagonal entries are all integers as follows:

(43) S^{(n)} ≡ the tri-diagonal matrix with diagonal entries d ≡ −2λ, superdiagonal 1, 2, ⋯, n, and subdiagonal n, n − 1, ⋯, 1.

Note that we have used the superscript (n) in order to identify the matrix representation conveniently. For n = 2j,

(44) det S^{(2j)} = 2^{2j+1} D^{[j]},

where the definition of D^{[j]} is given in Eq. (42). We are to obtain the recurrence relations for the determinant to identify the formula for D^{[j]}. The derivation relies on the following determinant reduction formula of a specific (n + 1) × (n + 1) tri-diagonal matrix [6,8] with diagonal d, superdiagonal a_1, a_2, ⋯, a_n, and subdiagonal a_n, a_{n−1}, ⋯, a_1:

(45) det = ∏_{k=0}^{n} [d + (n − 2k) a_1],

where {a_k} is an arithmetic sequence with the common difference a_1, i.e., a_k = k a_1. This relation can be deduced by the elementary operations contrived by Mazza in 1866 [6]:

(46) replace row_i by row_i + row_{i+2} + row_{i+4} + ⋯ for each i, and then replace col_{n+1} by col_{n+1} − col_{n−1}, col_n by col_n − col_{n−2}, col_{n−1} by col_{n−1} − col_{n−3}, ⋯,

where row_i and col_i indicate the ith row and column, respectively. A recursive application of the rule in Eq. (45) to Eq. (43) yields the recurrence relation for the determinants of the matrices S^{(n)}:

(47) det S^{(n)} = (d + n)(d − n) det S^{(n−2)}.

The recurrence relation of the D^{[j]}'s can be deduced by using Eq. (44). As a result, we get

(48) D^{[j]} = (λ² − j²) D^{[j−1]}.

With the initial conditions for the first two terms,

(49) D^{[0]} = −λ, D^{[1/2]} = λ² − 1/4,

we obtain the determinant in a factorized form:

(50) D^{[j]} = ∏_{m=−j}^{j} (m − λ).

If there exists a non-trivial solution for the eigenvector, then (A − λ1)^{−1} must not exist.
Thus the equation D^{[j]} = 0 determines the eigenvalues of the system. Therefore, for a given j, there are 2j + 1 eigenvalues, λ = m with m = −j, −j + 1, ⋯, j. It is remarkable that the complete spectrum of the magnetic quantum number is determined all at once directly from the factorized formula (50). The reason why we were able to find the factorized formula (50) is that the recurrence relation in Eq. (48) for an arbitrary j is expressed as a factorized form involving a determinant of lower dimension. This determination of the magnetic quantum number is completely independent of the well-known derivation with ladder operators. It is also independent of the determination by Narducci and Orszag [2], who made use of the analyticity requirement for the generating function of the eigenvector. Next, we consider the adjugate matrix of A − λ1 to derive the eigenvectors of the system. Because the computation of an element of the adjugate matrix involves the calculation of a different kind of matrix that cannot be identified with S^{(n)} in Eq. (43), it is necessary to classify such a matrix. We introduce the matrices S^{(k)}_n that suffice for our necessity, where k is any integer and we set d ≡ −2λ for simplicity; note that S^{(n−1)} = S^{(n−1)}_n. We define Δ^{(k)}_n by the determinant of the matrix S^{(k)}_n. The recurrence relation for the determinants Δ^{(k)}_n with a fixed k collapses every Δ^{(k)}_n to an expression in terms of d and k. A detailed derivation is provided in Appendix A.
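The collapsing of determinants advertised above is easy to demonstrate. The sketch below is our own: it specializes the arithmetic sequence to a_k = k (the pattern relevant here, known as the Sylvester-Kac matrix) and uses the standard three-term continuant recurrence for tri-diagonal determinants to check the factorized formula det = ∏_{k=0}^{n}(d + n − 2k). Setting d = −2λ and n = 2j, the roots are exactly λ = m with m = −j, ⋯, j:

```python
import numpy as np

def det_tridiag(diag, sup, sub):
    """Continuant recurrence: D_k = d_k D_{k-1} - sup_{k-1} sub_{k-1} D_{k-2}."""
    D_prev, D = 1.0, 1.0
    for k, d in enumerate(diag):
        D_prev, D = D, d * D - (sup[k - 1] * sub[k - 1] * D_prev if k > 0 else 0.0)
    return D

# S^(n): diagonal d, superdiagonal 1..n, subdiagonal n..1.
# Its determinant factorizes as prod_{k=0}^{n} (d + n - 2k).
for n in (1, 2, 3, 4, 5):
    sup = np.arange(1, n + 1, dtype=float)
    sub = sup[::-1]
    for d in (-3.0, -1.0, 0.5, 2.0):
        lhs = det_tridiag(np.full(n + 1, d), sup, sub)
        rhs = np.prod([d + n - 2 * k for k in range(n + 1)])
        assert np.isclose(lhs, rhs)
```

The recurrence evaluates each determinant in O(n) arithmetic operations, so no cofactor expansion is ever needed.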
With the help of the determinants Δ^{(k)}_n, we derive the components of the adjugate matrix adj(A − λ1) in a simple factorized form in Eq. (54); a rigorous proof of the relation is provided in Appendix B. The explicit forms of the adjugate matrices for j ≤ 3/2 are given below to elucidate their structure. If we put an eigenvalue m = p − j into λ, any column of the adjugate matrix gives the corresponding eigenvector up to normalization. Then, by suitably rescaling each component φ^(p)_q according to Eq. (9), we obtain the eigenvector ψ^(p). As an illustrative example, we consider the case of j = 1/2. By using the relation in Eq. (55) and the recurrence relation in Eq. (54), we obtain the adjugate matrix

adj S^{(1)} = adj(−2λ, 1; 1, −2λ) = (−2λ, −1; −1, −2λ).

Putting λ = ±1/2, we can directly read off the eigenvector from the columns of the adjugate: the columns are proportional to (1, ±1)^T, which reproduces the familiar eigenvectors (1, ±1)^T/√2 of J_x for j = 1/2.
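The read-off can be repeated symbolically for j = 1 (a sketch of ours; the integer matrix below assumes the rescaled characteristic matrix with d = −2λ on the diagonal, superdiagonal 1, 2 and subdiagonal 2, 1):

```python
import sympy as sp

lam = sp.symbols('lambda')
# Rescaled characteristic matrix S^(2) for j = 1 (assumed form, d = -2*lambda):
S = sp.Matrix([[-2 * lam, 1, 0],
               [2, -2 * lam, 2],
               [0, 1, -2 * lam]])
adjS = S.adjugate()
det = sp.expand(S.det())          # -8*lambda**3 + 8*lambda, roots -1, 0, 1

for m in (-1, 0, 1):
    B = adjS.subs(lam, m)
    col = B.col(0) if any(B.col(0)) else B.col(1)
    # Each non-vanishing column solves (S at lambda = m) * col = 0
    assert (S.subs(lam, m) * col).is_zero_matrix
```

At λ = 1, for instance, the first column is proportional to (1, 2, 1)^T; undoing the binomial rescaling of Eq. (9) gives (1, √2, 1)^T, the textbook J_x eigenvector for j = 1 up to normalization.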

Conclusions
We have computed the eigenvectors for the angular-momentum operator J_x with an arbitrary angular-momentum quantum number j. In usual textbooks on quantum mechanics, the computation is carried out by rotating a trivial eigenvector of the diagonal matrix J_z about the y axis by a right angle. Although the computation is straightforward, it requires knowledge of group-theoretical analysis involving unitary transformations.
Instead, we have found the eigenvectors by reading off a column of the adjugate matrix adj[A − λ1] for the eigenvalue equation without relying on matrix manipulations such as the diagonalization procedure. The essential indeterminacy of the linear equation for a general eigenproblem is resolved by regularizing the equation into (A − λ1 + ε1)ψ_ε = εc. The insertion of ε1 fills the dimension lost to the orthogonality between any row of A − λ1 and the eigenvector, making the determinant non-zero and the characteristic matrix invertible. The insertion of εc respects the boundary condition that the regularized equation reproduces the original equation in the limit ε → 0. The eigenvector has been derived by solving the linear equation as ψ_ε = ε(A − λ1 + ε1)^{−1}c. By taking the limit ε → 0, we have proved that ψ is parallel to every non-vanishing column of adj[A − λ1].
While the previous proof in Ref. [4] was restricted to the single limit ε → 0, we have generalized the theorem to cover values of ε over the whole complex plane. The eigenvector was expressed as ψ = ε(A − λ1 + ε1)^{−1}c − εΔ, in which the ε dependence cancels completely. The complete decoupling of the parameter of the undetermined multipliers resembles gauge invariance, in which any gauge-dependent parameter disappears even after introducing the gauge-fixing term into the Lagrangian density of a gauge field. Thus the generalized version of the proof frees us from having to apply the constraint equation ε = 0. The Lagrange undetermined multipliers survive inside the answer for the eigenvector, since they project out the direction parallel to the eigenvector, which is orthogonal to every row of the characteristic matrix A − λ1. The complete decoupling of the parameter and the survival of the Lagrange multipliers in the physical quantity are remarkable characteristics of Lagrange multipliers for an indeterminate linear equation. Our results involving the adjugate-matrix representation and the complete decoupling of ε are new.
For completeness of the paper, we have briefly reviewed the analysis by Narducci and Orszag [2], who determined the eigenvalues and eigenvectors by considering the difference equation satisfied by the components of the eigenvectors of J_x. While the authors of Ref. [2] employed the recurrence relation to derive the generating function containing all of the information on the components, we have presented a new approach to determine the eigenvalues and eigenvectors of J_x by introducing reduction formulas, valid for an arbitrary angular-momentum quantum number j, that relate expressions of lower dimensions. This leads to a great simplification: all of the expressions for both the determinant of the characteristic matrix and every element of the adjugate matrix take a completely factorized form built up from the lowest cases j = 0 or 1/2. It is remarkable that the general formula for the magnetic quantum number for any j was achieved by applying a recurrence relation connecting the tri-diagonal matrix to another of lower dimensions. Hence, the complete spectrum of the magnetic quantum number is determined all at once directly from the factorized formula (50). This determination of the magnetic quantum number is completely independent of the well-known derivation with ladder operators and of the generating-function method of Narducci and Orszag [2]. The determination of the eigenvectors involves the computation of the adjugate matrix for an arbitrary j, which is non-trivial. It is also remarkable that this adjugate matrix was reduced to a factorized form of determinants Δ^{(k)}_n of lower dimensions. A recursive application of the recurrence relation for Δ^{(k)}_n revealed compact factorized expressions for the eigenvectors that agree with the known results in the usual textbook [1].
Our approach for the adjugate matrix at a specific value of j is quite straightforward, while the derivation for an arbitrary j is rather involved. The regularization of the eigenvalue equation and the reduction formalism employed in this work are quite unique and pedagogical. Throughout the derivation, we have not relied on any machinery of Lie algebra, contour integrals, or complex variables. Thus even elementary-level undergraduate students can in principle apply this approach. Beyond physics, such tri-diagonal matrices appear frequently in various areas of mathematics, biology, computer science, and so forth. Our approach can be applied to those eigenproblems as well.
We denote by Δ^{(k)}_n the determinant of the matrix S^{(k)}_n. Expanding the determinant with respect to the last column, we find the recurrence relation (A2).

where the eigenvector with a tilde has the unit normalization. The eigenvectors are obtained by applying the conversion formula in Eq. (C1); as a result, we find the explicit forms in Eqs. (C23) and (C24), in which the eigenvector with a tilde again has the unit normalization.