A Power Method for Computing the Dominant Eigenvalue of a Dual Quaternion Hermitian Matrix

In this paper, we first study the projections onto the set of unit dual quaternions and the set of dual quaternion vectors with unit norms. Then we propose a power method for computing the dominant eigenvalue of a dual quaternion Hermitian matrix. For a strict dominant eigenvalue, we show that the sequence generated by the power method converges linearly to the dominant eigenvalue and its corresponding eigenvector. For a general dominant eigenvalue, we show that the standard part of the generated sequence converges linearly to the standard part of the dominant eigenvalue and its corresponding eigenvector. Building on these results, we reformulate the simultaneous localization and mapping (SLAM) problem as a rank-one dual quaternion completion problem and propose a two-block coordinate descent method to solve it. One block has a closed-form solution, and the other block is the best rank-one approximation problem of a dual quaternion Hermitian matrix, which can be computed by the power method. Numerical experiments are presented to show the efficiency of the proposed power method.


Introduction
Dual quaternion numbers and dual quaternion matrices are important in robotics research, e.g., the hand-eye calibration problem [7], the simultaneous localization and mapping (SLAM) problem [1,2,3,5,22,24], and kinematic modeling and control [19]. In [18], Qi and Luo studied right and left eigenvalues of square dual quaternion matrices. If a right eigenvalue is a dual number, then it is also a left eigenvalue; in this case, this dual number is called an eigenvalue of that dual quaternion matrix. They showed that the right eigenvalues of a dual quaternion Hermitian matrix are dual numbers, and thus eigenvalues. An n-by-n dual quaternion Hermitian matrix was shown to have exactly n eigenvalues. It is positive semidefinite, or positive definite, if and only if all of its eigenvalues are nonnegative, or positive and appreciable, dual numbers, respectively. A unitary decomposition for a dual quaternion Hermitian matrix was also proposed. In [19], it was shown that the eigenvalue theory of dual quaternion Hermitian matrices plays an important role in multi-agent formation control. However, numerical methods for computing the eigenvalues of a dual quaternion Hermitian matrix have been lacking.
The power method is one of the state-of-the-art numerical approaches for computing eigenvalues, including matrix eigenvalues [10], matrix sparse eigenvalues [26], tensor eigenvalues [11], nonnegative tensor eigenvalues [16], and quaternion matrix eigenvalues [14]. In this paper, we propose a power method for computing the dominant eigenvalue of a dual quaternion Hermitian matrix. We first study the projections onto the set of unit dual quaternions and the set of dual quaternion vectors with unit norms. Then we propose a power method for computing the eigenvalues of a dual quaternion Hermitian matrix. This fills a gap in dual quaternion matrix computation. We also define the convergence of a dual number sequence and study the convergence properties of the proposed power method.
The dual quaternion is a powerful tool for solving the SLAM problem. The SLAM problem aims to build a map of the environment and simultaneously localize the robot within this map, an essential skill for mobile robots navigating unknown environments in the absence of external referencing systems such as GPS. An intuitive way to solve the SLAM problem is via a graph-based formulation [9]. One line of work reformulates SLAM as a nonlinear least squares problem and solves it by the Gauss-Newton method [12]. However, as this problem is nonconvex and the Gauss-Newton iterations may be trapped at local minima, a good initialization is important [5]. Several methods have been proposed to address the issue of global convergence, including majorization minimization [8] and Lagrangian duality [4]. In this paper, we reformulate SLAM as a rank-one dual quaternion completion problem and propose a two-block coordinate descent method to solve it. One block subproblem has a closed-form solution, and the other block subproblem is solved by the dominant eigenpair. This connects the dual quaternion matrix theory with the SLAM problem.
The remainder of this paper is organized as follows. In the next section, we review some basic properties of dual quaternions and dual quaternion matrices, and define the convergence of a dual number sequence. In Section 3, we show that the projection onto the set of unit dual quaternions, and the projection onto the set of dual quaternion vectors with unit norms, can be obtained by normalization for appreciable numbers and vectors, respectively. In Section 4, we show that the sequence generated by the power method converges linearly to a strict dominant eigenvalue and eigenvector of a dual quaternion Hermitian matrix. For a general dominant eigenvalue, we show the convergence and the linear convergence rate of the standard part of the sequence to the standard part of the dominant eigenvalue and eigenvector. In Section 5, we present numerical results on computing the eigenvalues of Laplacian matrices. In Section 6, we reformulate SLAM as a rank-one approximation model and present a block coordinate descent method for solving it. Some final remarks are made in Section 7.
2 Dual Quaternions and Dual Quaternion Matrices

Dual quaternions
The sets of real numbers, dual numbers, quaternions, unit quaternions, dual quaternions, and unit dual quaternions are denoted as R, D, Q, U, Q̂, and Û, respectively. Denote R³ as V.
Write p = [p_0, p_v] and q = [q_0, q_v] ∈ Q, where p_v = [p_1, p_2, p_3], q_v = [q_1, q_2, q_3] ∈ V are the vector parts. The multiplication of p and q is defined by pq = [p_0 q_0 − p_v • q_v, p_0 q_v + q_0 p_v + p_v × q_v], where p_v • q_v is the dot product, i.e., the inner product of p_v and q_v, with p_v • q_v = p_1 q_1 + p_2 q_2 + p_3 q_3, and p_v × q_v is the cross product, with p_v × q_v = [p_2 q_3 − p_3 q_2, p_3 q_1 − p_1 q_3, p_1 q_2 − p_2 q_1]. Thus, in general, pq ≠ qp, and we have pq = qp if and only if p_v × q_v = 0, i.e., either p_v = 0 or q_v = 0, or p_v = α q_v for some real number α.
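As a quick sanity check, the quaternion product above can be sketched in a few lines of Python (a minimal illustration; the helper name qmul and the 4-vector layout are ours, not from the paper):

```python
import numpy as np

def qmul(p, q):
    """Product of quaternions p = [p0, p1, p2, p3] and q = [q0, q1, q2, q3]."""
    p0, pv = p[0], np.asarray(p[1:], dtype=float)
    q0, qv = q[0], np.asarray(q[1:], dtype=float)
    scalar = p0 * q0 - pv @ qv                      # p0 q0 - p . q
    vector = p0 * qv + q0 * pv + np.cross(pv, qv)   # p0 q + q0 p + p x q
    return np.concatenate(([scalar], vector))

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 0.0, 1.0])

print(qmul(i, j))   # ij = k
print(qmul(j, i))   # ji = -k, so quaternion multiplication is not commutative
```

The familiar identities ij = k and ji = −k make the non-commutativity concrete.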
For a quaternion q = [q_0, q_1, q_2, q_3] ∈ Q, its magnitude is defined by |q| = √(q_0² + q_1² + q_2² + q_3²). If |q| = 1, then q is called a unit quaternion. If p, q ∈ U, then pq ∈ U. For any q ∈ U, we have qq* = q*q = 1, i.e., q is invertible and q⁻¹ = q*.

A dual number a = a_st + a_I ε consists of the standard part a_st ∈ R and the dual part a_I ∈ R. The symbol ε is the infinitesimal unit; it satisfies ε ≠ 0 and ε² = 0. If a_st ≠ 0, then a is appreciable; otherwise, a is infinitesimal [17]. For a = a_st + a_I ε, b = b_st + b_I ε ∈ D, the addition of a and b is a + b = (a_st + b_st) + (a_I + b_I) ε, and the multiplication of a and b is ab = a_st b_st + (a_st b_I + a_I b_st) ε. The division of a by b, when b_st ≠ 0, or a_st = 0 and b_st = 0, is defined as a/b = a_st/b_st + ((a_I b_st − a_st b_I)/b_st²) ε if b_st ≠ 0, and a/b = a_I/b_I + c ε if a_st = b_st = 0 and b_I ≠ 0, where c ∈ R is an arbitrary real number. One may show that the division of dual numbers is the inverse operation of the multiplication of dual numbers. The zero element of D is 0_D = 0 + 0ε. The absolute value of a ∈ D is defined in [17] as |a| = |a_st| + sgn(a_st) a_I ε if a_st ≠ 0, and |a| = |a_I| ε otherwise. When a ≠ 0_D, we have a/|a| = sgn(a_st) if a_st ≠ 0, and a/|a| = sgn(a_I) otherwise. The total order of dual numbers was defined in [17]: we say that a > b if a_st > b_st, or a_st = b_st and a_I > b_I. Let a > 0_D be a positive and appreciable dual number. Then the square root of a is defined by [17] √a = √a_st + (a_I / (2√a_st)) ε.
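The dual number arithmetic above can be sketched as a small Python class (an illustrative implementation; the class name and the choice c = 0 in the degenerate division case are ours):

```python
import math

class Dual:
    """A dual number a_st + a_I * eps with eps^2 = 0 (a minimal sketch)."""
    def __init__(self, st, du=0.0):
        self.st, self.du = float(st), float(du)
    def __add__(self, b):
        return Dual(self.st + b.st, self.du + b.du)
    def __mul__(self, b):
        # (a_st + a_I eps)(b_st + b_I eps) = a_st b_st + (a_st b_I + a_I b_st) eps
        return Dual(self.st * b.st, self.st * b.du + self.du * b.st)
    def __truediv__(self, b):
        if b.st != 0.0:
            return Dual(self.st / b.st, (self.du * b.st - self.st * b.du) / b.st ** 2)
        if self.st == 0.0 and b.du != 0.0:
            return Dual(self.du / b.du, 0.0)  # dual part is an arbitrary c; we pick 0
        raise ZeroDivisionError("division undefined for these dual numbers")
    def __abs__(self):
        # |a| = |a_st| + sgn(a_st) a_I eps if appreciable, |a_I| eps otherwise
        if self.st != 0.0:
            return Dual(abs(self.st), math.copysign(1.0, self.st) * self.du)
        return Dual(0.0, abs(self.du))
    def __lt__(self, b):
        # total order: compare standard parts first, then dual parts
        return (self.st, self.du) < (b.st, b.du)
    def sqrt(self):
        # sqrt(a) = sqrt(a_st) + a_I / (2 sqrt(a_st)) eps for appreciable a > 0
        r = math.sqrt(self.st)
        return Dual(r, self.du / (2.0 * r))

a, b = Dual(6.0, 23.0), Dual(2.0, 5.0)
q = a / b            # division inverts multiplication: (a / b) * b == a
print((q * b).st, (q * b).du)   # 6.0 23.0
```

Note how the ε² = 0 rule makes every operation act separately on a linear combination of the standard and dual parts, which is why closed-form projections and power iterations carry over from the standard parts later in the paper.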
Let {a_k = a_k,st + a_k,I ε : k = 1, 2, …} be a dual number sequence. We say that this sequence is convergent with limit a = a_st + a_I ε if both its standard part sequence {a_k,st} and its dual part sequence {a_k,I} are convergent with limits a_st and a_I, respectively.
Given a dual number sequence {a_k}, we write a_k = O_D(c^k) if both |a_k,st| and |a_k,I| are bounded by a constant multiple of c^k. (1) If a real number sequence is bounded by h(k) c^k for a real positive number c < 1 and a polynomial h(k), then we denote it as Õ(c^k); accordingly, we write a_k = Õ_D(c^k) if both |a_k,st| and |a_k,I| are Õ(c^k). (2)

A dual quaternion q̂ = q_st + q_I ε consists of two quaternions: q_st, the standard part of q̂, and q_I, the dual part of q̂. We use a hat symbol to distinguish dual quaternions. If q_st ≠ 0, then q̂ is appreciable; otherwise, q̂ is infinitesimal [17].
Let p̂ = p_st + p_I ε, q̂ = q_st + q_I ε ∈ Q̂. Then the sum of p̂ and q̂ is p̂ + q̂ = (p_st + q_st) + (p_I + q_I) ε, and the product of p̂ and q̂ is p̂q̂ = p_st q_st + (p_st q_I + p_I q_st) ε.
A dual quaternion q̂ = q_st + q_I ε is called a unit dual quaternion if q̂q̂* = q̂*q̂ = 1. The magnitude of a dual quaternion q̂ ∈ Q̂ is [17] |q̂| = |q_st| + (sc(q_st* q_I)/|q_st|) ε if q̂ is appreciable, and |q̂| = |q_I| ε otherwise. Here, sc(q) = ½(q + q*) is the scalar part of q. Further, similar to [23], the 2*-norm can be defined by ‖q̂‖_2* = √(|q_st|² + |q_I|²). For any dual quaternion q̂ = q_st + q_I ε and dual number a = a_st + a_I ε with a_st ≠ 0, or a_st = 0 and q_st = 0, there is q̂/a = q_st/a_st + (q_I/a_st − q_st a_I/a_st²) ε if a_st ≠ 0, and q̂/a = q_I/a_I + c ε if a_st = 0, q_st = 0, and a_I ≠ 0, where c ∈ Q is an arbitrary quaternion number.

Dual quaternion matrices
We denote the sets of m-by-n real, quaternion, unit quaternion, dual quaternion, and unit dual quaternion matrices as R^{m×n}, Q^{m×n}, U^{m×n}, Q̂^{m×n}, and Û^{m×n}, respectively. We use Õ_{m×n} and Ô_{m×n} to denote the m-by-n zero quaternion and zero dual quaternion matrices, and Ĩ_{m×m} and Î_{m×m} to denote the m-by-m identity quaternion and identity dual quaternion matrices, respectively. Given Q̂ ∈ Q̂^{m×n}, if Q_st ≠ Õ_{m×n}, then Q̂ is appreciable; otherwise, Q̂ is infinitesimal.
The 2-norm of a dual quaternion vector x̂ = (x̂_i) ∈ Q̂^{n×1} is ‖x̂‖_2 = √(Σ_{i=1}^n |x̂_i|²). A vector x̂ = (x̂_i) ∈ Û^{n×1} is called a unit dual quaternion vector if each element of x̂ is a unit dual quaternion number. The set of n × 1 quaternion vectors with unit 2-norms is denoted by Q_2^{n×1}. A matrix Q̂ = (q̂_{i,j}) ∈ Û^{m×n} is called a unit dual quaternion matrix if each element of Q̂ is a unit dual quaternion number.
Given a matrix Q̂ = (q̂_{i,j}) ∈ Q̂^{m×n}, the transpose of Q̂ is Q̂^T = (q̂_{j,i}) ∈ Q̂^{n×m}, and the conjugate transpose of Q̂ is Q̂* = (q̂*_{j,i}) ∈ Q̂^{n×m}. If Q̂ = Q̂*, then it is a dual quaternion Hermitian matrix, and both its standard part and its dual part are quaternion Hermitian matrices.
The F-norm of a dual quaternion matrix Q̂ = (q̂_{ij}) ∈ Q̂^{m×n} is ‖Q̂‖_F = √(Σ_{i,j} |q̂_{ij}|²), and the F*-norm is ‖Q̂‖_F* = √(‖Q_st‖_F² + ‖Q_I‖_F²). For convenience in the numerical experiments, we define the 2_R-norm of a dual quaternion vector x̂ = (x̂_i) ∈ Q̂^{n×1} as ‖x̂‖_{2R} = √(‖x_st‖_2² + ‖x_I‖_2²), and the F_R-norm of a dual quaternion matrix Q̂ = (q̂_{ij}) ∈ Q̂^{m×n} as ‖Q̂‖_{FR} = √(‖Q_st‖_F² + ‖Q_I‖_F²), respectively.
3 Projection onto the Set of Dual Quaternions with Unit Norms
Given a quaternion number q ≠ 0 and a quaternion vector q ≠ Õ_{n×1}, the projections onto the unit quaternions and onto Q_2^{n×1} are given by normalization, q/|q| and q/‖q‖_2, respectively. Here, Q_2^{n×1} is the set of n-dimensional quaternion vectors with unit 2-norms. In the following, we show that for a dual quaternion number and a dual quaternion vector, the projection has a similar formulation.

Theorem 3.1. Given a dual quaternion number q̂ = q_st + q_I ε ∈ Q̂, we have the following conclusions.
(i) If q_st ≠ 0, the normalization of q̂ is the projection of q̂ onto the unit dual quaternion set. Namely, û = q̂/|q̂| (3) is a unit dual quaternion number and solves the projection problem min {‖v̂ − q̂‖ : v̂ ∈ Û}. (4)
(ii) If q_st = 0 and q_I ≠ 0, then the projection of q̂ onto the unit dual quaternion set is û = ũ_st + ũ_I ε, where ũ_st = q_I/|q_I| and ũ_I is any quaternion number satisfying sc(q_I* ũ_I) = 0. (5)

Proof. (i) The normalization formula in equation (3) follows from the dual division of q̂ = q_st + q_I ε by |q̂|. By direct computations, we have ũ*_st ũ_st = 1 and sc(ũ*_st ũ_I) = 0, so û is a unit dual quaternion. By the definition of the total order of dual numbers, we have ũ_st = arg min {|ṽ_st − q_st| : ṽ_st ∈ U}. Moreover, we have sc((ũ_st − q_st)*(ṽ_I − q_I)) = −sc((ũ_st − q_st)* q_I) for all ṽ_I satisfying sc(q_st* ṽ_I) = 0. In other words, any û with the standard part ũ_st = q_st/|q_st| and the dual part satisfying sc(q_st* ũ_I) = 0 is an optimal solution. Hence, we conclude that û in (3) is an optimal solution of (4).
(ii) When q_st = 0, for any v̂ ∈ Û the objective reduces, up to a constant, to −sc(ṽ*_st q_I) ε. By the definition of the total order of dual numbers, there is ũ_st = arg min {−sc(ṽ*_st q_I) : ṽ_st ∈ U}.
Hence, ũ_st = q_I/|q_I|. As the objective function is independent of ṽ_I, the dual part of û can be any quaternion number satisfying sc(q_I* ũ_I) = 0. This completes the proof.
For problem (4), any ũ_I satisfying sc(q_st* ũ_I) = 0 is an optimal solution. However, the choice of ũ_I in (3) is geometrically meaningful, as shown in the following proposition.

Proposition 3.2. Suppose q_st ≠ 0. Then ũ_I defined by (3) is a solution to the optimization problem min {‖ṽ − q_I/|q_st|‖ : sc(ṽ* q_st) = 0}. (7) Furthermore, û = q̂/|q̂| is the optimal solution to the corresponding projection problem.

Proof. Problem (7) is a convex quaternion optimization problem [20], since the objective function is convex and the constraint is linear. It then follows from the first-order optimality conditions given in Theorem 4.3 of [20] that there is a Lagrange multiplier λ ∈ R such that ṽ − q_I/|q_st| + λ q_st = 0 and sc(ṽ* q_st) = 0. By multiplying both sides of the first equation by q_st*, we obtain λ = sc(q_st* q_I)/|q_st|³. This completes the proof.
In the case that q_st = 0 and q_I ≠ 0, the normalization of q̂ is q̂/|q̂| = q_I/|q_I| + ṽ_I ε, where ṽ_I can be any quaternion number. The standard part is the same as that of (5), while the dual part is different because (5) requires the additional condition sc(q_I* ũ_I) = 0. Similarly, we can show that the projection onto the set of dual quaternion vectors with unit norms also has a closed-form solution.
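Theorem 3.1(i) can be illustrated numerically. The sketch below represents quaternions as real 4-vectors, so that sc(p* q) equals the Euclidean inner product of the 4-vectors, and checks that the normalized dual quaternion satisfies the unit conditions |u_st| = 1 and sc(u_st* u_I) = 0 (the function name and array layout are ours):

```python
import numpy as np

def normalize_dq(q_st, q_I):
    """Project an appreciable dual quaternion q_st + q_I*eps onto the unit dual
    quaternions via normalization (a sketch of Theorem 3.1(i))."""
    m = np.linalg.norm(q_st)                 # |q_st|
    u_st = q_st / m
    # dual part of q/|q|: q_I/|q_st| - q_st * sc(q_st^* q_I)/|q_st|^3,
    # where sc(p^* q) equals the inner product of the 4-vectors
    u_I = q_I / m - q_st * (q_st @ q_I) / m ** 3
    return u_st, u_I

rng = np.random.default_rng(0)
q_st, q_I = rng.standard_normal(4), rng.standard_normal(4)
u_st, u_I = normalize_dq(q_st, q_I)
print(np.linalg.norm(u_st))     # 1.0: unit standard part
print(u_st @ u_I)               # ~0: sc(u_st^* u_I) = 0, so u is a unit dual quaternion
```

The orthogonality of u_st and u_I falls out algebraically: the two terms of u_I contribute the same inner product with u_st and cancel.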
Theorem 3.3. Given a dual quaternion vector q̂ ∈ Q̂^{n×1}, we have the following conclusions. (i) If q_st ≠ Õ_{n×1}, the normalization of q̂ is the projection of q̂ onto the set of dual quaternion vectors with unit norms. Namely, û = q̂/‖q̂‖_2 is a dual quaternion vector with unit norm and the projection of q̂. (ii) If q_st = Õ_{n×1} and q_I ≠ Õ_{n×1}, the projection of q̂ onto the set of dual quaternion vectors with unit norms is û = ũ_st + ũ_I ε satisfying ũ_st = q_I/‖q_I‖_2, where ũ_I is any quaternion vector satisfying sc(q_I* ũ_I) = 0.
Proof.The proof of this theorem is similar to the proof of Theorem 3.1.We do not repeat it here.
Similar to Proposition 3.2, we have the following result.

Proposition 3.4. Suppose q_st ≠ Õ_{n×1}. Then ũ_I given by the normalization solves the corresponding constrained least squares problem. Furthermore, û = q̂/‖q̂‖_2 is the optimal solution to the corresponding projection problem.

Proof. The proof is similar to that of Proposition 3.2 and we omit it here.

4 The Power Method for Computing the Dominant Eigenvalue of a Dual Quaternion Hermitian Matrix
For a quaternion matrix Q, the power method can return the eigenvalue with the maximum absolute value and its associated eigenvector [14].We now study the power method for dual quaternion Hermitian matrices.
Let û be an eigenvector of Q̂ associated with the eigenvalue λ, and let p̂ be any unit dual quaternion. Then Q̂(ûp̂) = (λû)p̂ = λ(ûp̂), where the second equality follows from the fact that a dual number is commutative with a dual quaternion matrix. Hence, for a dual quaternion Hermitian matrix, the unit norm eigenvectors {û_i}_{i=1}^n, which form an orthonormal basis of Q̂^{n×1}, are not unique.
Suppose that we have a dual quaternion Hermitian matrix Q̂ ∈ Q̂^{n×n}. We say that an eigenvalue λ_1 of Q̂ is a dominant eigenvalue of Q̂, and an eigenvector corresponding to λ_1 a dominant eigenvector, if for any eigenvalue λ_j of Q̂ we have |λ_{1,st}| ≥ |λ_{j,st}| for all j = 2, …, n.
We say that an eigenvalue λ_1 of Q̂ is a strict dominant eigenvalue of Q̂, with multiplicity l, if Q̂ has eigenvalues λ_j for j = 1, …, n satisfying λ_1 = ⋯ = λ_l and |λ_{1,st}| > |λ_{j,st}| for j = l+1, …, n.

Assumption A. The standard parts of the dominant eigenvalues have the same sign.
If Q̂ has a strict dominant eigenvalue, then Q̂ satisfies Assumption A. On the other hand, suppose Q̂ does not satisfy Assumption A. Let P̂ = Q̂ + α Î_{n×n}, where α is a real number with sufficiently large magnitude. Then P̂ satisfies Assumption A, and λ is an eigenvalue of Q̂ if and only if λ + α is an eigenvalue of P̂. Thus, without loss of generality, we may assume that Q̂ satisfies Assumption A.
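The shifting trick can be illustrated on a real symmetric analogue (a toy stand-in for the dual quaternion case; all names are ours). A matrix with eigenvalues 3 and −3 has dominant eigenvalues of opposite signs, so Assumption A fails; shifting by α = 1 makes 4 strictly dominant, and subtracting α afterwards recovers an eigenvalue of the original matrix:

```python
import numpy as np

def power_iteration(Q, iters=500, seed=1):
    """Plain power iteration returning the Rayleigh-quotient eigenvalue estimate."""
    v = np.random.default_rng(seed).standard_normal(Q.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        y = Q @ v
        v = y / np.linalg.norm(y)
    return v @ Q @ v, v

# eigenvalues 3 and -3: no strict dominant eigenvalue, Assumption A fails
Q = np.array([[0.0, 3.0], [3.0, 0.0]])
# shift: P = Q + alpha*I has eigenvalues 4 and -2, so 4 is strictly dominant
alpha = 1.0
mu, v = power_iteration(Q + alpha * np.eye(2))
print(mu - alpha)   # approximately 3.0, an eigenvalue of Q
```

On the unshifted Q the iterates oscillate between the two eigenspaces, which is exactly the failure mode Assumption A rules out.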

Computing the strict dominant eigenvalues by the power method
If Q̂_st ≠ Õ, then at least one eigenvalue has a nonzero standard part. Given any initial dual quaternion vector v̂^(0) with unit norm, the power method iteratively computes ŷ^(k) = Q̂ v̂^(k−1), v̂^(k) = ŷ^(k)/‖ŷ^(k)‖_2, and λ^(k) = (v̂^(k))* Q̂ v̂^(k) for k = 1, 2, …. (9) This process is repeated until convergence or until the maximal iteration number is reached.
Algorithm 1 Power method for computing the dominant eigenvalues of a dual quaternion Hermitian matrix Require: The Hermitian matrix Q, the initial point v(0) , the maximal iteration number k max , and the tolerance δ.
Similar to the power method for real matrices and quaternion matrices, we show that v̂^(k) converges linearly to the eigenvector corresponding to a strict dominant eigenvalue.

Theorem 4.1. Suppose Q̂ ∈ Q̂^{n×n} has a strict dominant eigenvalue λ_1 with multiplicity l. Then the sequence generated by the power method (9) satisfies (10), where s = sgn(λ_{1,st}) and γ̃_j = α_j/√(Σ_{i=1}^l |α_i|²). In other words, the sequence v̂^(k) s^k converges to a strict dominant eigenvector.

Proof. Expand the initial vector in the orthonormal eigenbasis as v̂^(0) = Σ_{j=1}^n û_j α_j, so that Q̂^k v̂^(0) = Σ_{j=1}^n û_j λ_j^k α_j. Write β_j = λ_{j,st}^{-1} λ_{j,I} when λ_{j,st} ≠ 0; then λ_j^k = λ_{j,st}^k (1 + k β_j ε). If λ_{j,st} = 0, we have λ_j^k = 0_D for all k ≥ 2. When k goes to infinity, the terms with j > l vanish at a rate of Õ_D((|λ_{l+1,st}|/|λ_{1,st}|)^k), where O_D(·) is defined by (1) and Õ_D(·) is defined by (2); the polynomial factor comes from the k β_j term above. Consequently, (10) holds true and v̂^(k) s^k converges to Σ_{j=1}^l û_j γ̃_j. Since Q̂ (Σ_{j=1}^l û_j γ̃_j) = λ_1 Σ_{j=1}^l û_j γ̃_j, the limit Σ_{j=1}^l û_j γ̃_j is also an eigenvector corresponding to λ_1. This completes the proof.
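The dual-arithmetic power iteration can be sketched on a real symmetric analogue, where a "dual matrix" Q_st + Q_I ε stands in for the dual quaternion Hermitian matrix (an illustrative sketch under that simplification; all names are ours):

```python
import numpy as np

def dual_power_method(Q_st, Q_I, iters=300, seed=0):
    """Power method with dual-number arithmetic on Q = Q_st + Q_I*eps,
    a real symmetric stand-in for the dual quaternion Hermitian case."""
    rng = np.random.default_rng(seed)
    v_st = rng.standard_normal(Q_st.shape[0]); v_st /= np.linalg.norm(v_st)
    v_I = np.zeros_like(v_st)
    for _ in range(iters):
        # y = Q v in dual arithmetic: (Q_st v_st) + (Q_st v_I + Q_I v_st) eps
        y_st = Q_st @ v_st
        y_I = Q_st @ v_I + Q_I @ v_st
        # ||y|| = ||y_st|| + (<y_st, y_I>/||y_st||) eps; v = y/||y|| by dual division
        n_st = np.linalg.norm(y_st)
        n_I = (y_st @ y_I) / n_st
        v_st = y_st / n_st
        v_I = y_I / n_st - y_st * n_I / n_st ** 2
    # Rayleigh quotient v^* Q v in dual arithmetic
    lam_st = v_st @ Q_st @ v_st
    lam_I = v_st @ Q_I @ v_st + 2 * (v_st @ Q_st @ v_I)
    return lam_st, lam_I, v_st, v_I

rng = np.random.default_rng(3)
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Q_st = A @ np.diag([5.0, 2.0, 1.0]) @ A.T       # strict dominant eigenvalue 5
B = rng.standard_normal((3, 3)); Q_I = B + B.T  # arbitrary symmetric dual part
lam_st, lam_I, v_st, v_I = dual_power_method(Q_st, Q_I)
print(lam_st)   # approximately 5.0
```

Note that the dual normalization enforces v_st · v_I = 0 exactly at every step, so at convergence the cross term vanishes and λ_I reduces to v_st Q_I v_st, matching the remark later in this section.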

General dominant eigenvalues
In general, we may assume that Q̂ satisfies Assumption A. We then show the convergence of the standard parts of the sequences to the standard parts of the dominant eigenvalue and eigenvector, respectively.
Remark. In the general case, the dual parts of v̂^(k) and ŷ^(k) may not converge. In this case, Algorithm 1 only returns the standard parts of the dominant eigenvalue and its corresponding eigenvector. Denote the standard part of the eigenvector as ṽ_st = Σ_{j=1}^l ũ_{j,st} γ̃_{j,st}. By [18], we can compute the dual parts of the dominant eigenvalue and its corresponding eigenvector via λ_I = ṽ*_st Q_I ṽ_st, Q_st ṽ_I + Q_I ṽ_st = λ_st ṽ_I + λ_I ṽ_st, and sc(ṽ*_I ṽ_st) = 0.
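The linear system in this remark can be illustrated on a real symmetric analogue: stacking the (singular) system Q_st v_I + Q_I v_st = λ_st v_I + λ_I v_st with the orthogonality constraint gives a full-rank least squares problem (a sketch; the test matrices are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Q_st = A @ np.diag([5.0, 2.0, 1.0]) @ A.T    # strict dominant eigenvalue 5
B = rng.standard_normal((3, 3)); Q_I = B + B.T

lam_st = 5.0
v_st = A[:, 0]                                # dominant eigenvector of Q_st
lam_I = v_st @ Q_I @ v_st                     # dual part of the eigenvalue

# Solve (Q_st - lam_st I) v_I = lam_I v_st - Q_I v_st with v_st . v_I = 0 by
# stacking the singular linear system with the orthogonality constraint.
M = np.vstack([Q_st - lam_st * np.eye(3), v_st[None, :]])
rhs = np.concatenate([lam_I * v_st - Q_I @ v_st, [0.0]])
v_I, *_ = np.linalg.lstsq(M, rhs, rcond=None)

print(np.linalg.norm(Q_st @ v_I + Q_I @ v_st - lam_st * v_I - lam_I * v_st))  # ~0
```

The right-hand side is orthogonal to the null space of Q_st − λ_st I precisely because λ_I = v_st Q_I v_st, so the stacked system is consistent and the least squares solve recovers v_I exactly.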
4.4 All appreciable eigenvalues of a dual quaternion Hermitian matrix

By Theorem 7.1 in [15], the Eckart-Young-like theorem holds for dual quaternion matrices. If Q̂ ∈ Q̂^{n×n} is a dual quaternion Hermitian matrix, then by Theorem 4.1 in [18], Q̂ can be rewritten as Q̂ = Û Σ Û*, (13) where Û = [û_1, …, û_n] ∈ Q̂^{n×n} is a unitary matrix, Σ = diag(λ_1, …, λ_n) ∈ D^{n×n} is a diagonal dual matrix, and λ_i, i = 1, …, n, are in descending order. Denote Q̂_k = Q̂ − Σ_{i=1}^{k−1} λ_i û_i û*_i. Then (λ_k, û_k) is the dominant eigenpair of Q̂_k. If λ_{k,st} ≠ 0, then (λ_k, û_k) can be computed by applying the power method to Q̂_k. By repeating this process from k = 1 to n, we obtain all appreciable eigenvalues and their corresponding eigenvectors.
The process is summarized in Algorithm 2.

Algorithm 2 Computing all appreciable eigenvalues of a dual quaternion Hermitian matrix
Require: Q̂, the dimension n, and the tolerance γ. Set Q̂_1 = Q̂.

Proof. This lemma follows directly from (13) and Theorem 4.1.
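The deflation loop of Algorithm 2 can be sketched on a real symmetric positive semidefinite stand-in (illustrative only; the helper names and the relative-norm stopping rule mirror the description above):

```python
import numpy as np

def power_iteration(Q, iters=1000, seed=0):
    """Plain power iteration returning the Rayleigh-quotient eigenvalue estimate."""
    v = np.random.default_rng(seed).standard_normal(Q.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        y = Q @ v
        v = y / np.linalg.norm(y)
    return v @ Q @ v, v

def all_eigs_by_deflation(Q, tol=1e-8):
    """Algorithm 2 sketch: repeatedly extract the dominant eigenpair,
    then deflate Q_{k+1} = Q_k - lam_k v_k v_k^T."""
    Qk, pairs = Q.copy(), []
    for k in range(Q.shape[0]):
        if np.linalg.norm(Qk) <= tol * np.linalg.norm(Q):
            break                      # remaining eigenvalues are numerically zero
        lam, v = power_iteration(Qk, seed=k)
        pairs.append((lam, v))
        Qk = Qk - lam * np.outer(v, v)
    return pairs

rng = np.random.default_rng(5)
A, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Q = A @ np.diag([7.0, 4.0, 2.0, 0.5]) @ A.T
lams = sorted(lam for lam, _ in all_eigs_by_deflation(Q))
print(lams)   # approximately [0.5, 2.0, 4.0, 7.0]
```

Each deflation removes the current dominant eigenspace, so the next call to the power iteration converges to the next eigenvalue in descending order.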
Remark: From Lemma 4.3 and Theorem 6.1 in [18], we can also compute the singular value decomposition of a general dual quaternion matrix. Specifically, given a dual quaternion matrix Â ∈ Q̂^{m×n}, Qi and Luo [18] showed that there exist dual quaternion unitary matrices V̂ ∈ Q̂^{m×m} and Û ∈ Q̂^{n×n} such that Â = V̂ Σ Û*, where Σ is a diagonal matrix.
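This relationship can be checked numerically on a real matrix, where B = AᵀA plays the role of Â*Â (a small sketch; the test matrix is ours):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 3))

# B = A^* A and C = A A^* are Hermitian; their eigen-decompositions give the
# right/left singular vectors, and the singular values are sqrt of the eigenvalues.
B = A.T @ A
evals, U = np.linalg.eigh(B)                  # eigenvalues in ascending order
sing_from_eig = np.sqrt(evals[::-1])          # descending singular values

print(np.allclose(sing_from_eig, np.linalg.svd(A, compute_uv=False)))  # True
```

So any routine that computes Hermitian eigenpairs, such as the power method above, also yields singular value decompositions.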
In addition, if Q̂ is a rank-one matrix, then Q̂ = λ_1 û_1 û*_1 and the objective value at (λ_1, û_1) is zero. Hence (λ_1, û_1) is the optimal solution of (16), as it attains the global optimal value. This completes the proof.

5 Numerical Experiments for Computing Eigenvalues
In this section, we present numerical experiments for computing all appreciable eigenvalues of the Laplacian matrices of circles and random graphs.

Eigenvalues of Laplacian matrices of circles
In multi-agent formation control, the Laplacian matrix of the mutual visibility graph plays a key role [19]. Given a vector q̂ ∈ Û^{n×1} and a graph G = (V, E) with n vertices and m edges, the Laplacian matrix of G is defined as L̂ = D − Â, where D is a diagonal real matrix whose i-th diagonal element is equal to the degree of the i-th vertex, and Â = (â_{ij}) with â_{ij} = q̂*_i q̂_j if (i, j) ∈ E and â_{ij} = 0 otherwise. In multi-agent formation control, q̂_i ∈ Û is the configuration of the i-th rigid body, â_{ij} ∈ Û is the relative configuration of the rigid bodies i and j, and Â is the relative configuration adjacency matrix. Consider a five-point circle as shown in Fig. 1, and suppose q̂ is a random unit dual quaternion vector.
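For intuition, with trivial poses the standard part of this construction reduces to the ordinary cycle-graph Laplacian, whose eigenvalues 2 − 2cos(2πk/n) reproduce the values reported below (a real stand-in sketch; the function name is ours):

```python
import numpy as np

def cycle_laplacian(n):
    """Laplacian D - A of the n-point circle in the real stand-in case:
    with trivial unit poses, every relative-configuration weight a_ij is 1."""
    L = 2.0 * np.eye(n)                       # each vertex has degree 2
    for i in range(n):
        L[i, (i + 1) % n] = L[i, (i - 1) % n] = -1.0
    return L

# For n = 5 the eigenvalues 2 - 2cos(2*pi*k/5) are 0, 1.382, 1.382, 3.618, 3.618,
# matching the values the power method recovers for the dual quaternion Laplacian.
vals = np.sort(np.linalg.eigvalsh(cycle_laplacian(5)))
print(np.round(vals, 4))   # approximately [0, 1.382, 1.382, 3.618, 3.618]
```

This also makes the conjecture below plausible: the spectrum appears to depend only on the graph, not on the choice of q̂.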
We now compute all appreciable eigenvalues of L̂ by Algorithm 2. After 23, 25, 1, and 1 iterations, respectively, we obtain the four eigenvalues 3.618, 3.618, 1.382, and 1.382. Then Algorithm 2 stops, since the stopping criterion ‖L̂_{5,st}‖_F ≤ 10⁻⁶ ‖L_st‖_F is satisfied, where L̂_5 = L̂ − Σ_{k=1}^4 λ_k û_k û*_k. Further, L̂_{5,I} is also close to a zero matrix. Hence, all five eigenvalues of L̂ are 3.6180 + 0ε, 3.6180 + 0ε, 1.3820 + 0ε, 1.3820 + 0ε, and 0 + 0ε. The iterates of λ^(k) and v̂^(k) for computing the first and second eigenpairs are shown in Figure 2. From this figure, we see that both the eigenvalue and the eigenvector converge linearly, and the eigenvalue converges much faster than the eigenvector. Comparing the first and the second eigenpairs, we see that the convergence rates are the same. These observations are consistent with Theorem 4.1, which shows that v̂^(k) converges to a strict dominant eigenvector at a rate of Õ_D((1.382/3.618)^k) and λ^(k) converges to a strict dominant eigenvalue at a rate of Õ_D((1.382/3.618)^{2k}).

We further consider the Laplacian matrices of circles with 3 to 10 points and list their eigenvalues and the numbers of iterations of the power method in Table 1. From this table, we see that all appreciable eigenvalues are real numbers and the smallest eigenvalues are all zeros. Further, the second smallest eigenvalue is monotonically decreasing with the dimension n. This is due to the fact that the second smallest eigenvalue corresponds to the algebraic connectivity of the graph. For each n, the eigenvalues with the same values consume almost the same numbers of iterations, which matches the convergence rates in Theorem 4.1, and the number of iterations is smaller when λ_{l+1,st}/λ_{1,st} is small. We also present the convergence of λ^(k) and v̂^(k) for the first and second eigenpairs of the six-point circle in Figure 3. We see that the sequences converge linearly. The power method converges in 76 and 20 iterations for the first and the second eigenvalues, respectively, consistent with the corresponding ratios λ_{l+1,st}/λ_{1,st}. In our numerical experiments, we find that the eigenvalues are the same for any q̂. Hence, we make the following conjecture.
Conjecture. The eigenvalues of the Laplacian matrix of a circle are all nonnegative real numbers, and they are independent of the choice of q̂.

Eigenvalues of Laplacian matrices of random graphs
We continue to consider the Laplacian matrices of random graphs G = (V, E). Assume that G is a sparse undirected graph and E is symmetric. The sparsity of G is s = m/n², where m = |E| is the number of edges in E. In practice, we randomly generate m/2 edges and let E = {(i, j), (j, i) : (i, j) is sampled}. We show the results with n = 10 and n = 100 in Table 2. All experiments are repeated ten times with different choices of q̂, different E, and different initial values in the power method, and the averages are reported. Denote the error in the eigenvalues and the residual of the resulting matrix as e_λ and e_L, respectively, where λ_i and û_i are the limit points of λ^(k) and v̂^(k) for computing L̂_i in Algorithm 1, and n_0 is the total number of eigenvalues obtained by Algorithm 2. In Table 2, we denote 'n_iter' as the average number of iterations and 'time (s)' as the average CPU time in seconds for computing one eigenvalue.
From Table 2, we see that when n = 10, e_λ is less than 10⁻⁷ and e_L is less than 10⁻¹⁴. Similarly, when n = 100, the error in the eigenvalues and the residual of the resulting matrices are less than 1.1 × 10⁻³ and 10⁻⁷, respectively. All results are obtained in less than 1.1 seconds. This shows the efficiency of the power method in computing all appreciable eigenvalues.

6 The Dominant Eigenvalue Method for SLAM
In this section, we first reformulate the SLAM problem as a rank-one dual quaternion matrix recovery problem. Then we present a two-block coordinate descent approach for solving this model: the first block has a closed-form solution, and the second block is a rank-one dual quaternion matrix decomposition problem whose solution can be obtained by the dominant eigenpair. Finally, we present several numerical experiments to show the efficiency of our proposed method.

Consider a directed graph G = (V, E) with n vertices and m edges, where each vertex i ∈ V corresponds to a robot pose q̂_i ∈ Û for i = 1, …, n, and each directed edge (arc) (i, j) ∈ E corresponds to a relative measurement q̂_{ij} ∈ Û. The aim is to find the best q̂_i for i = 1, …, n to satisfy [5] q̂_{ij} = (q̂_i)* q̂_j.
The least squares model aims to find x̂ as the minimizer of problem (19). The solution of (19) is not unique, because x̂x̂* = (x̂q̂)(x̂q̂)* and x̂q̂ ∈ Û^{n×1} for any q̂ ∈ Û.
Instead of solving (19) directly, we introduce an auxiliary variable X̂ ∈ Q̂^{n×n} and solve problem (20). In the following, we show the equivalence between problems (19) and (20).
Theorem 6.1. The optimization problems (19) and (20) are equivalent in the following senses: (i) If x̂ is an optimal solution of (19), then X̂ = x̂x̂* is an optimal solution of (20). Furthermore, the optimal values of problems (19) and (20) are the same.
(ii) Let X̂ be an optimal solution of (20). Then its rank-one decomposition X̂ = λûû* satisfies λ > 0_D. Furthermore, x̂ = √λ û is an optimal solution of (19), and the optimal values of problems (19) and (20) are the same.
Proof. We first show that there is a one-to-one correspondence between x̂ ∈ Û^{n×1} and X̂ ∈ C_1 ∩ C_2. On one hand, suppose that x̂ ∈ Û^{n×1}. Then it follows from direct derivations that X̂ = x̂x̂* ∈ C_1 ∩ C_2. On the other hand, suppose that X̂ ∈ C_1 ∩ C_2. Then it follows from (13) that there exist λ ∈ D and û with û*û = 1 such that X̂ = λûû*. By x̂_{ii} = λû_i û*_i = 1, we have λ_st ũ_{i,st} ũ*_{i,st} = 1. Hence, λ_st > 0 and λ > 0_D. By [17], x̂ = √λ û ∈ Û^{n×1}, and X̂ = x̂x̂* is a feasible solution of (20). Suppose that X̂ is not an optimal solution, and denote the optimal solution as X̂′. It follows from the above discussion that there also exists x̂′ ∈ Û^{n×1} attaining a smaller objective value. This contradicts the optimality of x̂.
(ii) Denote x̂ = √λ û. Then it is a feasible solution of (19). The optimality of x̂ can be obtained by contradiction similarly.
This completes the proof.

A block coordinate descent method
We introduce auxiliary variables X̂_1 and X̂_2 and solve problem (24). The quadratic penalty approach is applied to reformulate (24) as problem (25). Then we compute X̂_1 and X̂_2 alternately by the block coordinate descent method [25]. Specifically, at the k-th iteration, given ρ^(k−1), X̂_1^(k−1), and X̂_2^(k−1), we compute X̂_1^(k) by minimizing the penalized objective over X̂_1, which has an explicit solution: x̂_{1,ii}^(k) = 1, and for all i ≠ j, x̂_{1,ij}^(k) is the projection of a weighted combination of x̂_{2,ij}^(k−1) and the observation onto the unit dual quaternions. Here, δ_{E,ij} is equal to one if (i, j) ∈ E and zero otherwise, and P_Û(·) is the projection onto the set of unit dual quaternion numbers defined by (3). The X̂_2-subproblem is the best rank-one approximation of X̂_1^(k), whose solution is given by the dominant eigenpair computed by Algorithm 1.

In the numerical experiments, the observations are generated with noise as in (30), where N̂ is the noise matrix. The parameters of Algorithm 3 are given as follows: the penalty parameters are ρ^(0) = 0.01 and ρ_1 = 1.1, the maximal iteration number is 1000, and the parameter in the stopping criterion is β = 10⁻⁵. All experiments are repeated ten times with different choices of q̂, different noises, and different initial values in Algorithm 3, and we report the average performance.
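The two-block scheme can be sketched on a drastically simplified real analogue, where poses are signs x_i ∈ {+1, −1}, observations are products x_i x_j, and the projection onto unit dual quaternions becomes the sign function (a toy sketch; all names, the toy instance, and the recovery rule are ours):

```python
import numpy as np

def slam_bcd_sketch(P, observed, n, iters=20, rho=0.01, rho_mult=1.1):
    """Two-block coordinate descent sketch on a real +/-1 analogue of the SLAM
    model. X1-step: entrywise closed-form update projected onto unit modulus
    (here, the sign). X2-step: best rank-one approximation via the dominant
    eigenpair (computed exactly with eigh in this sketch)."""
    X2 = np.zeros((n, n))
    proj = lambda v: 1.0 if v == 0 else float(np.sign(v))  # projection onto {-1, +1}
    for _ in range(iters):
        X1 = np.eye(n)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = 1.0 if (i, j) in observed else 0.0
                X1[i, j] = proj((rho * X2[i, j] + d * P[i, j]) / (rho + d))
        evals, evecs = np.linalg.eigh(X1)        # rank-one step
        lam, u = evals[-1], evecs[:, -1]
        X2 = lam * np.outer(u, u)
        rho *= rho_mult
    x_hat = np.sign(u)
    return x_hat * x_hat[0]                      # fix the global sign ambiguity

x = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
P = np.outer(x, x)
observed = {(i, j) for i in range(5) for j in range(5)
            if i != j and {i, j} not in ({0, 1}, {2, 4})}
x_hat = slam_bcd_sketch(P, observed, n=5)
print(x_hat)   # recovers x up to global sign
```

Even with two unobserved pairwise products, the rank-one step propagates the observed constraints and the missing entries are filled in consistently after a few iterations.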
First, consider the five-point circle SLAM problem with noisy observations (30). The results for different noise levels are given in Table 3. Here, 'l_noise' is the relative noise level, and 'e_x' and 'e_Q' are the error measurements. From Table 3, we can see that Algorithm 3 solves all problems within one minute with reasonable errors. The errors in both the vector x̂ and the matrix Q̂ increase with the noise level.
We continue to consider the SLAM problem on random graphs. Suppose the number of vertices is n and the observation rate is s = m/n². The numerical results are shown in Table 4. When n = 10, all results are obtained in less than one second. As the observation ratio increases, the errors in x̂ and Q̂ both decrease, and the number of iterations decreases. When the observation ratio is above 50%, the errors in both x̂ and Q̂ are near 10⁻⁶. When n = 100, we have similar observations, and the errors in both x̂ and Q̂ are near 10⁻⁶ when the observation ratio is sufficiently large.

7 Final Remarks
In this paper, we proposed a power method for computing the dominant eigenvalue of a dual quaternion Hermitian matrix and established its convergence and convergence rate. We used the Laplacian matrices of circles and random graphs to demonstrate the efficiency of the power method. We then studied the SLAM problem via a two-block coordinate descent approach, where one block has a closed-form solution and the other block can be solved by the power method. Our results fill a gap in dual quaternion matrix computation and build a connection between dual quaternion matrix theory and the SLAM problem. Together with the relation between dual quaternion matrix theory and the formation control problem established in [19], this shows that dual quaternion matrices have a solid application background and are worth further study.
Several issues remain to be solved.
1. When the multiplicity of the dominant eigenvalue is greater than one and at least two of the dominant eigenvalues have different dual parts, how can the eigenpairs be computed numerically?
2. We suspect that the eigenvalues of the Laplacian matrix of a circle are all nonnegative real numbers and are independent of the choice of q̂. Does this hold?
3. How can the convergence analysis of the block coordinate descent method for this dual quaternion optimization problem be established?


Let B̂ = Â*Â and Ĉ = ÂÂ*. Then B̂ = Û Σ² Û* and Ĉ = V̂ Σ² V̂* are both dual quaternion Hermitian matrices. Hence, the singular value decomposition of Â can be obtained from the eigenvalue decompositions of B̂ and Ĉ.

4.5 Relationship with the best rank-one approximation

Lemma. Given an appreciable dual quaternion Hermitian matrix Q̂ ∈ Q̂^{n×n}, the dominant eigenvalue and eigenvector form the best rank-one approximation of Q̂ under the square of the F-norm, i.e., (λ_1, û_1) ∈ arg min {‖Q̂ − λûû*‖²_F : λ ∈ D, û ∈ Q̂^{n×1}}. (16)

Figure 2: Convergence rate of the power method for the Laplacian matrix of a five-point circle.

Figure 3: Convergence rate of the power method for the Laplacian matrix of a six-point circle.

The error measurements in the SLAM experiments are e_x = ‖ŷ_r − x̂_r‖_{2R} / ‖x̂_r‖_{2R} and e_Q = ‖Q̂_0 − λûû*‖_{FR} / ‖Q̂_0‖_{FR}, where x̂_r = x̂ x̂*_i/|x̂_i| with i = arg max_{i=1,…,n} |x̂_i|, ŷ = û√λ, and ŷ_r = ŷ ŷ*_i/|ŷ_i|. 'n_iter' and 'time (s)' are the number of iterations and the total CPU time in seconds for implementing the block coordinate descent method in Algorithm 3, respectively.
for k = 1, …, n do: compute (λ_k, û_k) as the dominant eigenpair of Q̂_k by Algorithm 1.

Table 1: The eigenvalues (top row) of the Laplacian matrices of circles and the numbers of iterations (bottom row) of the power method.

Table 2: Numerical results of the power method for computing all appreciable eigenvalues of Laplacian matrices of random graphs with different sparsity values.

Table 3: Numerical results for the five-point circle SLAM problem with different noise levels.

Table 4: Numerical results for random graph SLAM problems with different observation ratios.