Finding normal modes of a loaded string with the method of Lagrange multipliers

We consider the normal-mode problem of a vibrating string loaded with n identical beads of equal spacing, which involves an eigenvalue problem. Unlike the conventional approach of solving the difference equation for the components of the eigenvector, we modify the eigenvalue equation by introducing matrix-valued Lagrange undetermined multipliers, which regularize the secular equation and make the eigenvalue equation non-singular. The eigenvector can then be obtained from the regularized eigenvalue equation by multiplying it by the inverse matrix. We find that this inverse matrix is, up to the determinant of the regularized matrix, nothing but the adjugate of the original matrix in the secular determinant in the limit that the constraint equation vanishes. The components of the adjugate matrix can be represented in simple factorized forms. Finally, one can read off the eigenvector directly from the adjugate matrix. We expect this new method to be applicable to other eigenvalue problems involving more general tridiagonal matrices that appear in classical mechanics or quantum physics.


Introduction
The vibration of a loaded string is one of the most advanced problems in the classical mechanics curriculum that involve an eigenvalue problem, and it requires various mathematical techniques to attack. It is a gateway to understanding quantum mechanics, because it has a mathematical structure of discrete eigenvalues analogous to that of the quantum mechanical infinite potential well. The problem is also a gateway to classical and quantum field theories, because the continuum limit that takes d → 0 and n → ∞ with d(n + 1) = L fixed leads to the concept of fields. Here, n is the number of beads, d is the equal spacing, and L is the length of the string.
Jungil Lee (jungil@korea.ac.kr), Department of Physics, Korea University, Seoul 02841, Korea; Director of the Korea Pragmatist Organization for Physics Education (KPOPE).

In classes on classical mechanics, the string vibration is introduced after the coupled harmonic oscillator [1]. While the mathematical structures of the two problems are essentially the same, the approaches to the two problems in ordinary textbooks are rather different: one solves the eigenvalue equation of the form (A − λ1)Ψ^(λ) = 0 for a lower-dimensional problem like the coupled oscillator by employing Gaussian elimination to find the relation among the components of an eigenvector, after having found an eigenvalue that solves the secular equation Det[A − λ1] = 0. Here, A is an n × n square matrix determined by the system, λ is an eigenvalue, and Ψ^(λ) is the corresponding eigenvector. This approach is not adequate for the problem of a vibrating string loaded with n beads, because the computation with an n × n matrix for a large or arbitrary integer n is too awkward to deal with. Thus, the method of difference equations is employed. Actually, the difference equation for the components of each eigenvector is the same as that for the secular determinant Det[A − λ1], with different boundary conditions [2,3]. The approach employing the three-term difference equation stems from the fact that the matrix A is tridiagonal, because only the nearest neighbors on the left and right of a given bead interact with it. Thus, the difference equation relates three consecutive terms of a sequence, and it can be solved in a manner similar to that used to solve a second-order ordinary linear differential equation [4]. This conventional approach determines every component of the eigenvector with the boundary conditions that the end points are fixed. These boundary conditions determine the eigenvalues simultaneously, without solving the secular equation Det[A − λ1] = 0. By contrast, the eigenvalues for a lower-dimensional problem like the coupled harmonic oscillator are usually found by solving the secular equation first.
In this paper, we focus on developing a single strategy to solve these two essentially identical problems. By employing the Lagrange-undetermined-multiplier approach recently developed in Ref. [5], we modify the eigenvalue equation into (A − λ1 + α1)Ψ^(λ) = αc, where α is a free parameter that regularizes the original matrix A − λ1 in the secular determinant, so that the regularized matrix A − λ1 + α1 is no longer singular. The identity matrix 1 and an arbitrary column vector c, each multiplied by α, are matrix-valued Lagrange undetermined multipliers. The substitution of the constraint equation α = 0 is delayed until we solve for the eigenvector Ψ^(λ) of this modified equation. The regularized eigenvalue equation is then solvable by multiplying it by the inverse matrix (A − λ1 + α1)^−1. We shall find that the regularized secular determinant D_n(λ, α) ≡ Det[A − λ1 + α1] approaches a linear function of α in the limit α → 0 as long as the system does not have degenerate eigenstates, which is true if the displacement of every bead is along a single axis. As a result, the eigenvector corresponding to the eigenvalue λ = λ_i can be expressed as

Ψ^(i) = lim_{α→0} α (A − λ_i 1 + α1)^−1 c.

We shall find that the matrix in front of c is, up to an overall constant, the adjugate adj(A − λ_i 1) of the original matrix, and this adjugate is always well defined even when A − λ_i 1 is singular. That every column and row of the adjugate matrix adj(A − λ_i 1) are parallel to the eigenvectors Ψ^(i) and Ψ^(i)†, respectively, is remarkable. To the best of our knowledge, this result is new. This paper is organized as follows: in Sect. 2, we review the conventional approach to finding the normal modes of a vibrating string. In Sect. 3, we present an alternative approach employing matrix-valued Lagrange undetermined multipliers to solve the problem given in Sect. 2. Conclusions are given in Sect. 4, and a rigorous derivation of the inverse matrix appearing in the regularized eigenvalue problem is given in the appendices.

Conventional approach
In this section, we review the conventional approach to find the normal modes of a loaded string that can be found in textbooks such as Refs. [1][2][3] on classical mechanics. Consider a vibrating string of length L = (n + 1)d and tension τ loaded with n identical beads of mass m with equal spacing d. Here, we restrict ourselves to the simplest case in which only the transverse motion along a single axis is allowed and the deviation from the equilibrium position of the kth bead at the longitudinal coordinate x k = kd is denoted by q k for k = 1 through n, as shown in Fig. 1.
We require both ends q 0 and q n+1 to be fixed: q 0 = q n+1 = 0. In the continuum limit q k → q(x), where q(x) and x are the continuum counterparts of q k and the longitudinal coordinate x k for the kth bead, respectively, the difference equation collapses into a second-order ordinary differential equation for q(x), and the above requirement becomes the Dirichlet boundary condition q(0) = q(L) = 0.
The kinetic energy T of the system of particles is given by

T = ½ q̇^T M q̇,

where M = m1 is the mass matrix, 1 = (δ_ij) is the n × n identity matrix, and q = (q_1 q_2 . . . q_n)^T. Here, δ_ij is the Kronecker delta. The potential energy of the string segment between the kth and the (k + 1)th beads is ½τ(q_{k+1} − q_k)²/d, neglecting the gravitational potential energy. This approximation is valid in the limit |q_{k+1} − q_k| ≪ d for all k from k = 0 through n. Thus, the potential energy of the string is

U = (τ/2d) q^T A q,

where A = (A_ij) and, for i, j = 1, . . ., n, the ij element of the matrix is given by

A_ij = 2δ_ij − δ_{i,j+1} − δ_{i,j−1}. (3)

A is apparently real and symmetric. This is a band matrix with the main diagonal elements all equal to 2 and the elements of the first diagonals below and above all equal to −1. The explicit form of A is the tridiagonal matrix

A = ( 2 −1  0 · · ·  0 )
    (−1  2 −1 · · ·  0 )
    ( 0 −1  2 · · ·  0 )
    ( ⋮   ⋮   ⋱   ⋱ −1 )
    ( 0  0 · · · −1  2 ).

The Lagrangian of the system is L = T − U, and the corresponding Euler-Lagrange equations are given by

M q̈ + (τ/d) A q = 0,

where 0 = (0 . . . 0)^T is the null column vector. The equation of motion is a system of coupled linear differential equations. The standard approach to solving such a system of equations can be found, for example, in Refs. [3,4]. Assuming the separation of variables, one can introduce a trial solution in which the space and the time components are completely factored,

q(t) = Ψ e^{−iωt},

where the eigenvector Ψ for the normal mode with frequency ω is independent of time. Then, we end up with the eigenvalue equation

(τ/d) A Ψ = mω² Ψ.

We rescale the equation into

(A − λ1) Ψ^(λ) = 0, (8)

by introducing the dimensionless eigenvalue λ as

λ = mdω²/τ.

Note that we have introduced a superscript (λ) in Ψ^(λ) to indicate that the eigenvector is for the normal mode corresponding to the dimensionless eigenvalue λ. The eigenvalue equation is equivalent to the three-term difference equation

−ψ_{r−1}^(λ) + (2 − λ) ψ_r^(λ) − ψ_{r+1}^(λ) = 0, (10)

where r runs from 1 through n. Because q_0 = q_{n+1} = 0 at any time t, we have set ψ_0^(λ) = ψ_{n+1}^(λ) = 0. The standard method to solve the difference equation (10) can be found, for example, in Ref. [4]: one can substitute the trial solution ψ_r^(λ) = κ^r with κ ≠ 0 into Eq.
(10) to find that

κ² − (2 − λ)κ + 1 = 0.

The solution for the parameter κ is

κ_± = ½[(2 − λ) ± √((2 − λ)² − 4)].

Then, the solution can be expressed as a linear combination

ψ_r^(λ) = a_1 κ_+^r + a_2 κ_−^r,

where a_1 and a_2 are constants. As is shown in Ref. [4], the difference equation has only the trivial solution ψ_r^(λ) = 0 unless 0 < λ < 4: (i) If λ > 4 or λ < 0, then κ_+ and κ_− are real and distinct.
In addition, the boundary conditions ψ_0^(λ) = ψ_{n+1}^(λ) = 0 require a_1 + a_2 = 0 and a_1 κ_+^{n+1} + a_2 κ_−^{n+1} = 0, which cannot be satisfied by real, distinct κ_± unless a_1 = a_2 = 0. The solution is trivial: ψ_r^(λ) = 0, because a_1 = a_2 = 0. Thus, any λ in the region λ < 0 or λ > 4 is not an eigenvalue. (ii) If 0 < λ < 4, then κ_± = e^{±iθ} are complex numbers of unit modulus, where the angle θ is defined by 2 cos θ = 2 − λ. Substituting the trial solution into the difference equation (10) and imposing the boundary condition ψ_0^(λ) = 0, we find that

ψ_r^(λ) = a sin rθ,

where a is a constant. The condition ψ_{n+1}^(λ) = 0 requires sin(n + 1)θ = 0. Thus, the angle θ is determined as

θ_s = sπ/(n + 1), s = 1, 2, . . ., n.

As a result, we find the n eigenvectors and the corresponding eigenvalues as

ψ_r^(s) = a sin[rsπ/(n + 1)], λ_s = 2 − 2 cos θ_s = 4 sin²[sπ/(2(n + 1))]. (22)

Here, the superscript (s) stands for (λ_s), and λ_s is a discrete eigenvalue λ that depends on the integer s. From Eq. (22), manifestly, no degeneracy occurs among any of the n eigenstates. The dimensionless eigenvalue λ_s is monotonically increasing as s increases from 1 to n. It has the minimum value λ_1 = 4 sin²[½π/(n + 1)] at s = 1 and the maximum value λ_n = 4 sin²[½nπ/(n + 1)] at s = n. In the continuum limit, they approach λ_1 → 0 and λ_n → 4.
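As a quick numerical cross-check (not part of the original derivation; NumPy is assumed and all variable names are ours), one can build the matrix A for a small n and compare its spectrum with the closed-form eigenvalues λ_s = 4 sin²[sπ/(2(n + 1))]:

```python
import numpy as np

# Tridiagonal matrix A for n beads: 2 on the main diagonal,
# -1 on the first sub- and superdiagonals.
n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Closed-form dimensionless eigenvalues, ascending in s.
s = np.arange(1, n + 1)
lam_exact = 4.0 * np.sin(s * np.pi / (2.0 * (n + 1))) ** 2

lam_num = np.linalg.eigvalsh(A)  # eigvalsh returns eigenvalues in ascending order
print(np.allclose(lam_num, lam_exact))  # True
```

The agreement illustrates that the boundary conditions alone reproduce the full spectrum without solving the secular equation symbolically.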
The normalization of the eigenvector is set to be

Ψ^(s)† Ψ^(s) = 1, which fixes a = √(2/(n + 1)), (23)

and one can check the orthonormality of the eigenvectors, Ψ^(r)† Ψ^(s) = δ_rs, by making use of the trigonometric identity [6]

Σ_{k=0}^{n} sin[rkπ/(n + 1)] sin[skπ/(n + 1)] = ½(n + 1) δ_rs, r, s = 1, 2, . . ., n. (24)

Hence, the solution for q can be expressed as a linear combination of the eigenvectors

q(t) = Σ_{s=1}^{n} η_s(t) Ψ^(s),

where every eigenvector is independent of time and all of the time dependence is contained only in the normal coordinates η_s(t) satisfying the equation of motion

η̈_s + ω_s² η_s = 0, ω_s = √(τλ_s/(md)).

The orthonormal relation in Eq. (24) can be used to project out the normal coordinate

η_s(t) = Ψ^(s)† q(t).

If the initial condition of the string is given by q(0) and its time derivative q̇(0), then the corresponding initial conditions for the normal coordinates are determined as η_s(0) = Ψ^(s)† q(0) and η̇_s(0) = Ψ^(s)† q̇(0).
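The orthonormality can likewise be verified numerically (a NumPy sketch with our own naming): the matrix whose columns are the normalized eigenvectors ψ_r^(s) = √(2/(n + 1)) sin[rsπ/(n + 1)] is orthogonal, which is what makes the projection η_s = Ψ^(s)† q possible.

```python
import numpy as np

n = 5
r = np.arange(1, n + 1)
# Psi[:, s-1] holds the normalized eigenvector components psi_r^(s).
Psi = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(r, r) * np.pi / (n + 1))

# Psi^T Psi = 1: the eigenvectors are orthonormal, so a normal coordinate
# is projected out by a single matrix-vector product.
print(np.allclose(Psi.T @ Psi, np.eye(n)))  # True
```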

Lagrange-multiplier approach
In this section, we adopt the Lagrange-undetermined-multiplier approach developed in Ref. [5] instead of the conventional one described in the previous section. By introducing a parameter α, which actually vanishes, we modify the indeterminate linear equation in Eq. (8) as

(A − λ1 + α1) Ψ^(λ) = αc, (29)

where c is an arbitrary complex column vector

c = (c_1 c_2 . . . c_n)^T. (30)

The modification involves the matrix-valued Lagrange undetermined multipliers 1 and c, each multiplied by the parameter α, and in this case, the constraint equation will be α = 0. The role of the matrix-valued multipliers is basically the same as that of the conventional one, which is a number. This trick does not alter the equation at all. However, the modification in Eq. (29) makes the matrix A − λ1 + α1 invertible, so that we can solve for the eigenvector as

Ψ^(λ) = lim_{α→0} α (A − λ1 + α1)^−1 c (32)

for an arbitrary vector c yielding a non-vanishing result. In order to investigate the structure of Eq. (32), we define the α-dependent determinant D_n(λ, α)

D_n(λ, α) ≡ Det[A − λ1 + α1], (33)

where the subscript n of D_n(λ, α) indicates that the matrix A − λ1 + α1 is an n × n square matrix. That D_n(λ, α = 0) = D_n(λ) ≡ Det[A − λ1], which vanishes for an eigenvalue λ, is apparent. Both D_n(λ) and D_n(λ, α) can be derived from the recursion relation among the determinants of the matrices of various dimensions. Details of the derivation are provided in Appendix A. From Eq. (A14), the explicit form of the α-dependent determinant is given by

D_n(λ, α) = sin[(n + 1)θ_α]/sin θ_α, with 2 cos θ_α = 2 − λ + α.

The secular equation D_n(λ, 0) = 0 determines the eigenvalue λ. We shall find in Eq. (44) that no degenerate normal frequency exists. Thus, the determinant at a given eigenvalue λ = λ_i must be factorized as

D_n(λ_i, α) = α Π_{j≠i} (α + λ_j − λ_i).

In the limit α → 0, the product Π_{j≠i} (α + λ_j − λ_i) on the right side collapses into a non-vanishing number independent of α, because every eigenvalue is distinct. Thus,

lim_{α→0} D_n(λ_i, α)/α = Π_{j≠i} (λ_j − λ_i).

The generalization of the method including degeneracies is presented in Ref. [5].
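The limiting behavior of D_n(λ, α) can be illustrated numerically (a NumPy sketch under our own naming, not part of the paper's derivation): at the lowest eigenvalue, D_n(λ_1, 0) vanishes, while D_n(λ_1, α)/α tends to the product of the remaining eigenvalue gaps.

```python
import numpy as np

n = 5
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam_1 = 4.0 * np.sin(np.pi / (2.0 * (n + 1))) ** 2  # lowest eigenvalue

def D(alpha):
    """Regularized secular determinant D_n(lam_1, alpha)."""
    return np.linalg.det(A - lam_1 * np.eye(n) + alpha * np.eye(n))

# D(0) vanishes (secular equation), and D(alpha)/alpha approaches the
# non-vanishing product of (lambda_j - lambda_1) over j != 1.
lams = np.linalg.eigvalsh(A)
prod = np.prod(lams[1:] - lam_1)
print(np.isclose(D(0.0), 0.0, atol=1e-8), np.isclose(D(1e-6) / 1e-6, prod, rtol=1e-4))
```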
Hence, the eigenvector Ψ^(i) corresponding to a given eigenvalue λ_i can be factorized as the product of two 'finite' quantities

Ψ^(i) = [lim_{α→0} α/D_n(λ_i, α)] [lim_{α→0} D_n(λ_i, α) (A − λ_i 1 + α1)^−1] c. (36)

Because c is arbitrary, c can absorb the first factor by redefinition: [lim_{α→0} α/D_n(λ_i, α)] c → c. Then, without loss of generality, the eigenvector can be written in a compact form

Ψ^(i) = lim_{α→0} D_n(λ_i, α) (A − λ_i 1 + α1)^−1 c, (37)

which is essentially identical to Eq. (32). However, Eq. (37) is more practical than Eq. (32), because the factor 1/D_n(λ_i, α) coming from the inverse matrix (A − λ_i 1 + α1)^−1 cancels the prefactor D_n(λ_i, α) in Eq. (37). The matrix multiplied by the arbitrary column vector c in Eq. (36) is proportional to the projection operator that projects the arbitrary vector onto the eigenvector for the eigenvalue λ_i, and it is nothing but the adjugate of the matrix A − λ_i 1:

lim_{α→0} D_n(λ_i, α) (A − λ_i 1 + α1)^−1 = adj(A − λ_i 1). (38)

Note that this identity holds for any eigenvalue problem without degeneracy.
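This prescription can be sketched numerically (NumPy assumed; a small but finite α stands in for the limit α → 0, and all names are ours): multiplying an arbitrary vector c by det(M) M^−1 with M = A − λ_i 1 + α1 recovers the eigenvector.

```python
import numpy as np

n = 4
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, vecs = np.linalg.eigh(A)
lam_i, v_i = lam[0], vecs[:, 0]      # lowest eigenpair as the reference

alpha = 1e-7                          # small alpha in place of the limit
M = A - lam_i * np.eye(n) + alpha * np.eye(n)
adj = np.linalg.det(M) * np.linalg.inv(M)   # det(M) M^{-1} = adj(M)

c = np.arange(1.0, n + 1.0)           # an arbitrary column vector
psi = adj @ c
psi /= np.linalg.norm(psi)
# psi agrees (up to sign) with the reference eigenvector
print(np.isclose(abs(psi @ v_i), 1.0, atol=1e-6))
```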
If we substitute the explicit matrix elements in Eq. (3) for the loaded-string system, then we find that every element of the adjugate can be factorized into the product of two factors of the form

[adj(A − λ1)]_{ij} = D_{i−1} D_{n−j} for i ≤ j, [adj(A − λ1)]_{ij} = D_{j−1} D_{n−i} for i ≥ j, (39)

where D_k ≡ Det[A_[k×k] − λ1_[k×k]] and D_0 ≡ 1. Here, A_[k×k] and 1_[k×k] are k × k square matrices for k ≤ n that are analogous to A and 1, and D_n ≡ D_n(λ). A rigorous proof of the identity in Eq. (39) is presented in Appendix B. For the loaded-string system that we consider, the jth column of the matrix representation for adj(A − λ1) is explicitly given as

(D_0 D_{n−j}, D_1 D_{n−j}, . . ., D_{j−1} D_{n−j}, D_{j−1} D_{n−j−1}, . . ., D_{j−1} D_1, D_{j−1} D_0)^T. (41)

Here, the index of the front factor in each element increases from 0 to j − 1 by a single unit as the row index increases; it reaches j − 1 at the jth row and is fixed after that. The index of the second factor remains fixed at n − j until the row index reaches j; after that, it starts to decrease from the (j + 1)th row and reaches 0 at the bottom. According to Eq. (A14), an explicit form of D_n is given by

D_n = sin[(n + 1)θ]/sin θ, with 2 cos θ = 2 − λ.

If λ is an eigenvalue, then D_n = 0. Thus, the angle θ for the eigenvalue λ_s is determined as

θ_s = sπ/(n + 1), s = 1, 2, . . ., n.

Therefore, the eigenvalues are obtained as

λ_s = 2 − 2 cos θ_s = 4 sin²[sπ/(2(n + 1))]. (44)

While D_n|_{λ=λ_s} is always vanishing, D_k|_{λ=λ_s} is not vanishing for any k < n:

D_k|_{λ=λ_s} = sin[(k + 1)θ_s]/sin θ_s ≠ 0, k < n.

From here on, we adopt the convention D_k ≡ D_k|_{λ=λ_s}. Then, using the trigonometric relation

sin[(n − k)θ_s] = sin[(n + 1)θ_s − (k + 1)θ_s] = (−1)^{s+1} sin[(k + 1)θ_s],

we deduce a useful relation between D_k and D_{n−k−1}:

D_{n−k−1} = (−1)^{s+1} D_k. (47)

With the help of the identity in Eq. (47), replacing D_{n−j} by (−1)^{s+1} D_{j−1}, we find that the jth column of the adjugate can be written in the form

(−1)^{s+1} D_{j−1} (D_0, D_1, . . ., D_{n−1})^T = [(−1)^{s+1} D_{j−1}/sin θ_s] (sin θ_s, sin 2θ_s, . . ., sin nθ_s)^T. (50)

As a result, every column of adj(A − λ_s 1) is parallel to the eigenvector Ψ^(s). Note that, for an arbitrary column vector c in Eq. (30), one can easily check that Ψ^(s) is indeed the eigenvector for the eigenvalue λ_s in Eq. (44):

adj(A − λ_s 1) c = [(−1)^{s+1}(n + 1)/(2 sin² θ_s)] (Σ_{r=1}^{n} ψ_r^(s) c_r) Ψ^(s),

where c_r is the rth element of the column vector c. Equivalently,

adj(A − λ_s 1) = [(−1)^{s+1}(n + 1)/(2 sin² θ_s)] Ψ^(s) ⊗ Ψ^(s)†, (53)

where the symbol ⊗ represents the direct product. In conclusion, we can read off the eigenvector directly from the columns (or rows) of the adjugate, which are represented in simple forms in Eqs. (41) and (50).
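The factorized form of the adjugate elements can be checked numerically at a generic, non-eigenvalue λ, where the adjugate is simply Det[M] M^−1 (a NumPy sketch; all names are ours):

```python
import numpy as np

# Check [adj(A - lam 1)]_{ij} = D_{min(i,j)-1} D_{n-max(i,j)},
# with D_k = sin[(k+1) th] / sin(th) and 2 cos(th) = 2 - lam.
n = 5
lam = 0.7                       # generic value in 0 < lam < 4, not an eigenvalue
th = np.arccos(1.0 - lam / 2.0)
D = np.sin((np.arange(n + 1) + 1) * th) / np.sin(th)  # D[k] = D_k, D[0] = 1

M = (2.0 - lam) * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
adj = np.linalg.det(M) * np.linalg.inv(M)

ok = all(
    np.isclose(adj[i - 1, j - 1], D[min(i, j) - 1] * D[n - max(i, j)])
    for i in range(1, n + 1) for j in range(1, n + 1)
)
print(ok)  # True
```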
As a result, we can construct the completeness relation for any eigenvalue equation:

1 = Σ_{s=1}^{n} Ψ^(s) ⊗ Ψ^(s)† = Σ_{s=1}^{n} a_s adj(A − λ_s 1). (54)

We can make an educated guess that the coefficient a_s of adj(A − λ_s 1) in the summation in Eq. (54) depends on A, and the explicit value a_s = 2 sin²[sπ/(n + 1)]/[(−1)^{s+1}(n + 1)] represents the nature of the loaded-string system.
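The completeness relation can be confirmed numerically (a NumPy sketch with our own naming; each adjugate is evaluated through the small-α regularization described above):

```python
import numpy as np

# 1 = sum_s a_s adj(A - lam_s 1),
# with a_s = 2 sin^2[s pi/(n+1)] / [(-1)^{s+1} (n+1)].
n = 4
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
alpha = 1e-7                    # small alpha in place of the limit alpha -> 0

total = np.zeros((n, n))
for s in range(1, n + 1):
    lam_s = 4.0 * np.sin(s * np.pi / (2.0 * (n + 1))) ** 2
    M = A - lam_s * np.eye(n) + alpha * np.eye(n)
    adj = np.linalg.det(M) * np.linalg.inv(M)
    a_s = 2.0 * np.sin(s * np.pi / (n + 1)) ** 2 / ((-1) ** (s + 1) * (n + 1))
    total += a_s * adj

print(np.allclose(total, np.eye(n), atol=1e-5))  # True
```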

Conclusions
We have considered the normal-mode problem of a vibrating string loaded with n identical beads of equal spacing, which involves the eigenvalue problem (A − λ1)Ψ^(λ) = 0 given in Eq. (8). The conventional approach to solving this problem, which can easily be found in most textbooks on classical mechanics, is to solve the difference equation for the components ψ_r^(λ) of the eigenvector Ψ^(λ); this difference equation is the same as that for the secular determinant D_n ≡ Det[A − λ1] except that the boundary conditions, ψ_0^(λ) = ψ_{n+1}^(λ) = 0, are different. Unlike the conventional approach, we have modified the eigenvalue equation as (A − λ1 + α1)Ψ^(λ) = αc in Eq. (29), where α eventually vanishes at the end of problem solving with the constraint equation α = 0. The identity matrix 1 and an arbitrary column vector c, each multiplied by α, behave as matrix-valued Lagrange undetermined multipliers. While the matrix A − λ1 for the original eigenvalue equation is singular, the regularized one A − λ1 + α1 is not. Thus, we can solve the regularized linear equation for the eigenvector Ψ^(λ) by multiplying the equation by the inverse matrix (A − λ1 + α1)^−1. The string vibration with the displacement along only a single axis is non-degenerate. Thus, the regularized secular determinant D_n(λ, α) is a polynomial function of α with the leading contribution of degree 1 as α → 0. By making use of this limiting behavior, we have shown that the eigenvector can be expressed as

Ψ^(i) = lim_{α→0} D_n(λ_i, α) (A − λ_i 1 + α1)^−1 c.

The matrix in front of c is well defined even when we take the limit α → 0, while A − λ_i 1 + α1 becomes singular. We have indeed shown that this matrix is the adjugate adj(A − λ_i 1). Furthermore, we have shown that every column and every row of the adjugate matrix adj(A − λ_i 1) are parallel to the eigenvectors Ψ^(i) and Ψ^(i)†, respectively, which is apparent in Eq. (53). As a result, the completeness relation 1 = Σ_{s=1}^{n} Ψ^(s) ⊗ Ψ^(s)† can be expanded as a linear combination of the adjugates adj(A − λ_s 1) summed over the eigenvalues λ_s, as is shown in Eq. (54).
Detailed mathematical proofs of the essential identities used to derive the results listed above are summarized in the appendices. Our derivation reveals that the regularization technique for the eigenvalue equation employing the matrix-valued Lagrange undetermined multipliers first introduced in Ref. [5] is quite useful in obtaining every eigenvector of an eigenvalue problem. To the best of our knowledge, the approach we have presented for reading off the eigenvector of any eigenvalue problem from the adjugate matrix, as well as the related results such as the factorization formula in Eq. (39) and the completeness relation in Eq. (54) that derives from the adjugate formula in Eq. (38), are all new.
While we have restricted ourselves to the special case ψ_0 = ψ_{n+1} = 0, which is consistent with the Dirichlet boundary condition ψ(0) = ψ(L) = 0 in the continuum limit, the method developed in this paper can, in principle, be applied to any boundary conditions. One can then consider, for example, a more general Dirichlet boundary condition in which ψ_0 and/or ψ_{n+1} are non-vanishing constants, or even the case analogous to the Neumann boundary condition, in which the derivatives ψ′(0) and ψ′(L) are constants in the continuum limit. Slightly more complicated situations, such as a loaded string with a few concentrated masses [7,8], one with viscous damping [9], or a hanging string with a tip mass [10,11], can also be considered. Applying our method to more general cases and comparing the solutions with those in the continuum limit would be instructive. The application of other boundary conditions requires a corresponding modification of the matrix A, in particular, of the first and last rows and columns of A. Then, one can obtain the corresponding eigenvectors from the adjugate of the modified matrix A − λ1, but the complete form of the eigenvectors is expected to depend on the boundary condition.
More generally, the method developed in this paper can be applied to any eigenvalue problem involving a tridiagonal matrix. As an example, one can consider the eigenvalue problem of the angular momentum matrix J x , which is a tridiagonal matrix, if we choose J z to be diagonal as usual. In this case, the first diagonal elements above and below the main diagonal are not universal, but constant only in the submatrix corresponding to the angular momentum J . We can make an educated guess that the adjugate expression for the eigenvectors in Eq. (38) will be useful in this eigenvalue problem and that the complete form of the eigenvectors could be determined from the adjugate form. We expect this method to be applicable to other eigenvalue problems having a more general form of the tridiagonal matrix.
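As a minimal illustration of this outlook (our own NumPy sketch, not a result of the paper), the same small-α adjugate trick extracts an eigenvector of the spin-1 matrix J_x, whose off-diagonal elements 1/√2 are constant only within the given angular momentum block:

```python
import numpy as np

# Spin-1 J_x in the J_z basis (hbar = 1): a tridiagonal matrix with
# 1/sqrt(2) on the first sub- and superdiagonals.
Jx = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]]) / np.sqrt(2.0)

lam = 1.0                       # a known eigenvalue of spin-1 J_x
alpha = 1e-7                    # small alpha in place of the limit alpha -> 0
M = Jx - lam * np.eye(3) + alpha * np.eye(3)
adj = np.linalg.det(M) * np.linalg.inv(M)   # det(M) M^{-1} = adj(M)

v = adj[:, 0] / np.linalg.norm(adj[:, 0])   # any column is the eigenvector
print(np.allclose(Jx @ v, lam * v, atol=1e-6))  # True
```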
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix B: Proof of the factorization in Eq. (39)

The ij element of the adjugate is the ji cofactor of A − λ1:

[adj(A − λ1)]_{ij} = (−1)^{i+j} M_ji.

Here, M_ji is the (n − 1) × (n − 1) submatrix of A − λ1 that is identical to A − λ1 except that the jth row and the ith column are removed, and M_ji ≡ Det[M_ji] is the (j, i)-minor of A − λ1.
When i ≥ j, the (n − 1) × (n − 1) submatrix M_ji is block triangular: its upper-left (j − 1) × (j − 1) block is the tridiagonal matrix with determinant D_{j−1}, its lower-right (n − i) × (n − i) block is the tridiagonal matrix with determinant D_{n−i}, and the (i − j) × (i − j) block in between is triangular with the elements −1 on its diagonal, where d ≡ 2 − λ denotes the main diagonal elements and the omitted elements of the matrix are all vanishing. Then, by applying the elementary row operations of the matrix that preserve the value of the determinant, we can always transform the submatrix M_ji into the upper triangular form. In that case, the determinant of the matrix is just the product of the diagonal elements. Hence, we conclude that

M_ji = (−1)^{i−j} D_{j−1} D_{n−i}, i ≥ j,

so that [adj(A − λ1)]_{ij} = (−1)^{i+j} M_ji = D_{j−1} D_{n−i}. When i ≤ j, we perform similar operations: by applying the elementary column operations of the matrix, we can always transform the submatrix M_ji into the lower triangular form while keeping its determinant unchanged. Again, the determinant of the matrix is just the product of the diagonal elements. Hence, we conclude that

M_ji = (−1)^{j−i} D_{i−1} D_{n−j}, i ≤ j,

so that [adj(A − λ1)]_{ij} = D_{i−1} D_{n−j}, which completes the proof of Eq. (39).
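The minor formula can be spot-checked numerically for a sample pair (i, j) with i ≥ j (a NumPy sketch; all names are ours), using D_k = sin[(k + 1)θ]/sin θ:

```python
import numpy as np

# Check M_ji = (-1)^(i-j) D_{j-1} D_{n-i} for the (j,i)-minor of A - lam 1.
n, i, j = 6, 5, 2               # a sample case with i >= j
lam = 0.9
th = np.arccos(1.0 - lam / 2.0)
D = np.sin((np.arange(n + 1) + 1) * th) / np.sin(th)  # D[k] = D_k, D[0] = 1

M = (2.0 - lam) * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
# Remove the j-th row and the i-th column (1-indexed) to form the submatrix.
sub = np.delete(np.delete(M, j - 1, axis=0), i - 1, axis=1)
minor = np.linalg.det(sub)
print(np.isclose(minor, (-1) ** (i - j) * D[j - 1] * D[n - i]))  # True
```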