A Riemannian rank-adaptive method for low-rank matrix completion

The low-rank matrix completion problem can be solved by Riemannian optimization on a fixed-rank manifold. However, a drawback of the known approaches is that the rank parameter has to be fixed a priori. In this paper, we consider the optimization problem on the set of bounded-rank matrices. We propose a Riemannian rank-adaptive method, which consists of fixed-rank optimization, a rank increase step, and a rank reduction step, and we explore its performance on the low-rank matrix completion problem. Numerical experiments on synthetic and real-world datasets illustrate that the proposed rank-adaptive method compares favorably with state-of-the-art algorithms. In addition, they show that each aspect of this rank-adaptive framework can be incorporated separately into existing algorithms to improve their performance.


Introduction
The low-rank matrix completion problem has been extensively studied in recent years; see the survey [11]. The matrix completion model based on a Frobenius-norm minimization over the manifold of fixed-rank matrices is formulated as

min_{X ∈ M_k} f(X) := (1/2) ‖P_Ω(X − A)‖_F²,    (1)

where M_k := {X ∈ R^{m×n} : rank(X) = k}, A ∈ R^{m×n} is a data matrix only known on a subset Ω ⊂ {1, …, m} × {1, …, n}, k ≤ min(m, n) is a given rank parameter, and P_Ω : R^{m×n} → R^{m×n} denotes the projection onto Ω, i.e., [P_Ω(X)]_{i,j} = X_{i,j} if (i, j) ∈ Ω and [P_Ω(X)]_{i,j} = 0 otherwise. When the rank of the data matrix A is known or estimated a priori, this model has been shown to be effective for low-rank matrix completion; see [9,17]. However, the choice of the rank parameter k affects the speed of the methods that address (1) as well as the quality of the returned completion. In practice, if rank(A) ≥ k, solving problem (1) returns a low-rank solution on M_k. When rank(A) < k, it is advisable to replace the constraint X ∈ M_k of problem (1) by X ∈ M_s for some s < k. Since estimating the rank of A is inconvenient in general, a rank-adaptive algorithm with a flexible choice of the rank parameter k is desirable.
We consider the following model for low-rank matrix completion:

min_X (1/2) ‖P_Ω(X − A)‖_F²  s.t.  X ∈ M_{≤k} := {X ∈ R^{m×n} : rank(X) ≤ k}.    (2)
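As a concrete illustration of models (1) and (2), the masked residual and objective can be evaluated as below; this is a minimal sketch, and the helper names (`proj_omega`, `completion_objective`) are ours, not taken from the paper's released code.

```python
import numpy as np

def proj_omega(X, mask):
    """P_Omega: keep the entries indexed by Omega, zero out the rest."""
    return X * mask

def completion_objective(X, A, mask):
    """f(X) = 0.5 * ||P_Omega(X - A)||_F^2, the objective of (1) and (2)."""
    R = proj_omega(X - A, mask)
    return 0.5 * np.sum(R ** 2)

# Toy instance: A is known only on a random subset Omega.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
mask = rng.random((5, 4)) < 0.6          # boolean indicator of Omega
print(completion_objective(A, A, mask))  # 0.0: the exact completion has zero residual
```

Note that the Euclidean gradient of this objective is simply ∇f(X) = P_Ω(X − A), which is used throughout the paper.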
Several algorithms based on Riemannian optimization (e.g., see [1]) for this problem have been developed in [13,15,12]. Recently, a Riemannian rank-adaptive method for low-rank optimization has been proposed in [19], and problem (2) can be viewed as a specific application. This rank-adaptive algorithm mainly consists of two steps: Riemannian optimization on the fixed-rank manifold and adaptive update of the rank. When a nearly rank-deficient point is detected, the algorithm reduces the rank to save computational cost. Alternatively, it increases the rank to gain accuracy. However, there are several parameters that users need to tune. In this paper, we propose a new Riemannian rank-adaptive method (RRAM). In comparison with the RRAM method in [19], we set aside convergence analysis in order to focus on the efficiency of the proposed method for low-rank matrix completion. Specifically, the contributions are as follows.
- We adopt a Riemannian gradient method with non-monotone line search and Barzilai-Borwein step size to solve the optimization problem on the fixed-rank manifold.
- By detecting a significant gap in the singular values of the iterates, we propose a novel rank reduction strategy such that the fixed-rank problem can be restricted to a dominant subspace. In addition, we propose a normal correction strategy to increase the rank. Note that existing algorithms may benefit from these rank-adaptive mechanisms to improve their numerical performance.
- We demonstrate the effectiveness of the proposed method applied to low-rank matrix completion. Numerical experiments on synthetic and real-world datasets illustrate that the proposed rank-adaptive method is able to find the ground-truth rank and compares favorably with other state-of-the-art algorithms in terms of time efficiency.
The value of the new rank-adaptive method is application dependent. However, our observations provide insight on the kind of data matrices for which rank-adaptive mechanisms play a valuable role. The main purpose of this paper is thus to explore the numerical behaviors of rank-adaptive methods on synthetic and real-world problems.
The rest of the paper is organized as follows. The next section introduces related work based on rank-update mechanisms and presents the necessary ingredients of the proposed method. In section 3, a new Riemannian rank-adaptive method is proposed and its implementation details are provided. Numerical experiments are reported in section 4. The conclusion is drawn in section 5.

Related work and preliminaries
In this section, we start with related work and give the preliminaries regarding the geometric aspect.

Related work
The feasible set M_k of problem (1) is a smooth submanifold of dimension (m + n − k)k embedded in R^{m×n}; see [8, Example 8.14]. A Riemannian conjugate gradient (RCG) method for solving problem (1) has been proposed in [17], which efficiently assembles the ingredients of RCG by exploiting the low-rank structure of the matrices. There have been other methods for fixed-rank optimization, including the Riemannian trust-region method (RTR) [10] and the Riemannian gradient-descent method (RGD) [19]. Mishra et al. [10] have considered a trace-norm penalty model for low-rank matrix completion and have proposed a method that alternately performs a fixed-rank optimization and a rank-one update.
However, M_k is not closed in R^{m×n}, hence min_{X∈M_k} f(X) may not have a solution even when f is continuous and coercive; moreover, if a Riemannian optimization algorithm has a limit point of rank less than k, then the classical convergence results in Riemannian optimization (e.g., [3]) do not provide guarantees about the limit point. As a remedy, one can resort to the set of bounded-rank matrices, i.e., M_{≤k}. Recently, algorithms for solving problem (2), combining fixed-rank Riemannian optimization with a rank-increase update, have been introduced in [13,15]. Basically, these methods increase the rank by a constant, searching along the tangent cone of M_{≤k} and projecting onto M_k or M_{≤k}. In addition, a general projected line-search method on M_{≤k} has been developed in [12], whose convergence guarantee is based on the assumption that the limit points of the algorithm have rank k. In summary, we list these rank-related algorithms with their corresponding features in Table 1; a detailed explanation will be given in Remark 1 (page 12) after we introduce the necessary geometric ingredients for Riemannian optimization.

Geometry of M_{≤k}
The geometry of M_{≤k} has been well studied in [12]. In this subsection, we introduce several Riemannian aspects that will be used in the rank-adaptive method.

[Table 1 Rank-related algorithms and their features, including their rank-update rules: the rank-one update X_+ = X + α w y^⊤ of RTR [10], and the rank-increase update X_+ = P_{M_{s+l}}(X + α G_{≤(s+l)}) of the RCG methods [13,15].]

Given the singular value decomposition (SVD) of fixed-rank matrices, an equivalent expression of the manifold M_k is

M_k = { U diag(σ_1, …, σ_k) V^⊤ : U ∈ St(k, m), V ∈ St(k, n), σ_1 ≥ ⋯ ≥ σ_k > 0 },

where St(k, m) := {X ∈ R^{m×k} : X^⊤ X = I_k} denotes the (compact) Stiefel manifold, and diag(σ_1, …, σ_k) denotes the diagonal matrix with {σ_i} on its diagonal. This expression of M_k provides a convenient way to assemble other geometric tools. For instance, the tangent space of M_s at X = U Σ V^⊤ ∈ M_s is given by (see [17, Proposition 2.1])

T_X M_s = { [U U_⊥] [ M_1  M_2 ; M_3  0 ] [V V_⊥]^⊤ : M_1 ∈ R^{s×s}, M_2 ∈ R^{s×(n−s)}, M_3 ∈ R^{(m−s)×s} },    (3)

where U_⊥ ∈ R^{m×(m−s)} denotes a matrix such that U^⊤ U_⊥ = 0 and U_⊥^⊤ U_⊥ = I (and similarly for V_⊥ ∈ R^{n×(n−s)}). Moreover, the normal space of M_s at X associated with the Frobenius inner product ⟨X, Y⟩ := tr(X^⊤ Y) has the form

(T_X M_s)^⊥ = { U_⊥ N V_⊥^⊤ : N ∈ R^{(m−s)×(n−s)} }.    (4)

Letting P_U := U U^⊤ and P_U^⊥ := U_⊥ U_⊥^⊤ = I − P_U, the orthogonal projections of Y ∈ R^{m×n} onto the tangent space and the normal space at X are

P_{T_X M_s}(Y) = P_U Y P_V + P_U^⊥ Y P_V + P_U Y P_V^⊥  and  P_{(T_X M_s)^⊥}(Y) = P_U^⊥ Y P_V^⊥.

The Riemannian gradient of f at X ∈ M_s, denoted by grad_s f(X), is defined as the unique element in T_X M_s such that ⟨grad_s f(X), Z⟩ = Df(X)[Z] for all Z ∈ T_X M_s, where Df(X) denotes the Fréchet derivative of f at X. It readily follows from [1, (3.37)] that

grad_s f(X) = P_{T_X M_s}(∇f(X)),

where ∇f(X) denotes the Euclidean gradient of f at X.
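The two projections above can be sketched in a few lines of Python. This is a sketch with a hypothetical helper name (`tangent_normal_proj`); it uses the equivalent identity P_{T_X M_s}(Y) = P_U Y + Y P_V − P_U Y P_V.

```python
import numpy as np

def tangent_normal_proj(U, V, Y):
    """Orthogonal projections of Y onto T_X M_s and (T_X M_s)^perp for
    X = U diag(sigma) V^T with orthonormal U (m x s) and V (n x s)."""
    PUY = U @ (U.T @ Y)                              # P_U Y
    YPV = (Y @ V) @ V.T                              # Y P_V
    tangent = PUY + YPV - U @ ((U.T @ Y) @ V) @ V.T  # P_U Y + Y P_V - P_U Y P_V
    normal = Y - tangent                             # equals P_U^perp Y P_V^perp
    return tangent, normal

rng = np.random.default_rng(1)
m, n, s = 6, 5, 2
U, _ = np.linalg.qr(rng.standard_normal((m, s)))
V, _ = np.linalg.qr(rng.standard_normal((n, s)))
Y = rng.standard_normal((m, n))
T, N = tangent_normal_proj(U, V, Y)
print(abs(np.sum(T * N)) < 1e-10)  # True: the two parts are orthogonal in Frobenius sense
```

Computing `normal` as the complement of `tangent` avoids forming U_⊥ and V_⊥ explicitly, which matters when m and n are large.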
When s < k, the tangent cone of M_{≤k} at X ∈ M_s can be expressed as the orthogonal decomposition [12, Theorem 3.2]

T_X M_{≤k} = T_X M_s ⊕ { W ∈ (T_X M_s)^⊥ : rank(W) ≤ k − s },    (5)

where ⊕ denotes the direct sum and the second term is a subset of the normal space (T_X M_s)^⊥. Furthermore, the projection onto the tangent cone has the form [12, Corollary 3.3]

P_{T_X M_{≤k}}(Y) = P_{T_X M_s}(Y) + P_{k−s}( P_{(T_X M_s)^⊥}(Y) ),    (6)

where P_{k−s}(·) denotes a best rank-(k − s) approximation in the Frobenius norm. In particular, applying (6) to Y = −∇f(X) yields the decomposition

G_{≤k}(X) := P_{T_X M_{≤k}}(−∇f(X)) = G_s(X) + N_{k−s}(X),    (7)

with the tangent part G_s(X) := P_{T_X M_s}(−∇f(X)) = −grad_s f(X) and the normal part N_{k−s}(X) := P_{k−s}(P_{(T_X M_s)^⊥}(−∇f(X))). Consequently, the optimality condition of problem (2) can be defined as follows; see [12, Corollary 3.4].
Definition 1 X_* ∈ M_{≤k} is called a critical point of the optimization problem (2) if ‖G_{≤k}(X_*)‖_F = 0.
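The tangent-cone projection, and hence the criticality measure of Definition 1, can be sketched as follows. The helper names (`best_rank_l`, `G_leq_k`) are ours; the best rank-l approximation is computed by a truncated SVD.

```python
import numpy as np

def best_rank_l(W, l):
    """Best rank-l approximation of W via truncated SVD."""
    Uw, sw, Vwt = np.linalg.svd(W, full_matrices=False)
    return (Uw[:, :l] * sw[:l]) @ Vwt[:l]

def G_leq_k(U, V, neg_grad, k, s):
    """Projection of -grad f onto the tangent cone of M_{<=k} at X in M_s:
    tangent part G_s plus a best rank-(k-s) part of the normal component."""
    Gs = U @ (U.T @ neg_grad) + (neg_grad @ V) @ V.T \
         - U @ ((U.T @ neg_grad) @ V) @ V.T           # P_{T_X M_s}(-grad f)
    N = best_rank_l(neg_grad - Gs, k - s)             # N_{k-s}(X)
    return Gs, N

rng = np.random.default_rng(3)
m, n, k, s = 7, 6, 4, 2
U, _ = np.linalg.qr(rng.standard_normal((m, s)))
V, _ = np.linalg.qr(rng.standard_normal((n, s)))
neg_grad = rng.standard_normal((m, n))
Gs, N = G_leq_k(U, V, neg_grad, k, s)
# Criticality measure of Definition 1: ||G_<=k||_F^2 = ||G_s||_F^2 + ||N||_F^2
crit = np.sqrt(np.sum(Gs ** 2) + np.sum(N ** 2))
```

Because G_s and N_{k−s} lie in orthogonal subspaces, the norm of G_{≤k} can be accumulated from the two parts without forming their sum.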

A Riemannian rank-adaptive method
We propose a new Riemannian rank-adaptive algorithm in this section. The Riemannian Barzilai-Borwein method with a non-monotone line search is proposed to solve the fixed-rank optimization problem. Rank increase and rank reduction strategies are developed.

Algorithmic framework
In view of Definition 1, a critical point X_* ∈ M_{≤k} of problem (2) satisfies

0 = ‖G_{≤k}(X_*)‖_F² = ‖G_s(X_*)‖_F² + ‖N_{k−s}(X_*)‖_F²,

where the first equality follows from the orthogonal decomposition (5) of G_{≤k}(X_*). This enlightens us to develop a two-stage algorithm: 1) solve the optimization problem on M_s, i.e.,

min_{X ∈ M_s} f(X);    (8)

2) adapt the rank in order to reduce ‖N_{k−s}(X)‖_F.
Note that G_s(X) = 0 is the first-order optimality condition of (8).

[Fig. 1 Flowchart of the Riemannian rank-adaptive method (RRAM); see (14) for the initialization, Algorithm 1 for the fixed-rank optimization, Algorithm 2 for the rank increase, Algorithm 3 for the rank reduction, subsection 4.3 for the "increase rank" test, and subsection 4.2 for the "detect large gap" test.]
To this end, a globally convergent Riemannian optimization algorithm on M_s can be employed in the fixed-rank optimization stage. After this stage, one can check the violation of optimality via (7). If it is still large, meaning that the current rank s cannot capture adequate information for problem (2), then we increase the rank by searching along the normal space. Figure 1 shows the flowchart of the proposed rank-adaptive method (RRAM). The framework has three major parts: optimization on the fixed-rank manifold, rank increase, and rank reduction. In the rest of this section, we address these three aspects.

Riemannian optimization on M_s
Very recently, the Riemannian gradient method with non-monotone line search and Barzilai-Borwein (BB) step size [2] has been shown to be efficient in various applications; see [7,6,5]. In addition, its global convergence has been established (e.g., see [5, Theorem 5.6]). We adopt this method to solve problem (8) in Figure 1. Given the initial point X_p ∈ M_s, the detailed method, called RBB, is listed in Algorithm 1.
In step 2, we calculate a step size based on the Riemannian BB method [7], where the vector transport on M_s is defined by the orthogonal projection onto the target tangent space, i.e., T_{X→Y}(ξ) := P_{T_Y M_s}(ξ) for ξ ∈ T_X M_s. The non-monotone line search is presented in steps 3 and 5. The metric projection P_{M_s} is chosen as the retraction map in step 4; it can be calculated by a truncated SVD. Note that this projection is not necessarily unique, but we can always choose one in practice. All calculations in Algorithm 1 can be performed efficiently by exploiting the low-rank structure of X^{(j)} = U Σ V^⊤. The interested reader is referred to [17] for detailed implementations.
We terminate Algorithm 1 when the maximum iteration number j_max is reached or one of the stopping criteria in section 4 is satisfied. Regarding time efficiency, Algorithm 1 often compares favorably with other methods for fixed-rank optimization; see the numerical examples in subsection 4.1.
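Algorithm 1 is not reproduced here, but a minimal sketch of a Riemannian BB gradient method on M_r is given below. All helper names are ours, and a plain monotone Armijo backtracking stands in for the non-monotone line search of [5]; the retraction is the truncated SVD (metric projection onto M_r) and the vector transport is the projection onto the current tangent space, as described above.

```python
import numpy as np

def tproj(U, Vt, Y):
    """Project Y onto the tangent space of M_r at X = U diag(s) Vt."""
    V = Vt.T
    return U @ (U.T @ Y) + (Y @ V) @ Vt - U @ ((U.T @ Y) @ V) @ Vt

def rbb_fixed_rank(A, mask, r, iters=300):
    """Sketch of a Riemannian BB gradient method on M_r for problem (1)."""
    def retract(Y):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return U[:, :r], s[:r], Vt[:r]

    def f(X):
        return 0.5 * np.sum((mask * (X - A)) ** 2)

    U, s, Vt = retract(mask * A)       # initialization: best rank-r approx of P_Omega(A)
    X = (U * s) @ Vt
    alpha, X_old, G_old = 1.0, None, None
    for _ in range(iters):
        G = tproj(U, Vt, mask * (X - A))            # Riemannian gradient
        if X_old is not None:
            S = tproj(U, Vt, X - X_old)             # transported iterate difference
            Z = G - tproj(U, Vt, G_old)             # transported gradient difference
            d = abs(np.sum(S * Z))
            alpha = np.sum(S * S) / d if d > 1e-16 else 1.0   # BB1 step size
        X_old, G_old = X, G
        while True:                                 # Armijo backtracking safeguard
            Un, sn, Vtn = retract(X - alpha * G)
            Xn = (Un * sn) @ Vtn
            if f(Xn) <= f(X) - 1e-4 * alpha * np.sum(G * G) or alpha < 1e-12:
                break
            alpha *= 0.5
        U, s, Vt, X = Un, sn, Vtn, Xn
    return X

# Complete a random rank-2 matrix from ~60% of its entries.
rng = np.random.default_rng(0)
B1, B2 = rng.standard_normal((30, 2)), rng.standard_normal((20, 2))
A = B1 @ B2.T
mask = rng.random(A.shape) < 0.6
X = rbb_fixed_rank(A, mask, 2)
rel_res = np.linalg.norm(mask * (X - A)) / np.linalg.norm(mask * A)  # small after convergence
```

For clarity the sketch works with dense m × n matrices; the paper's implementation instead keeps all iterates in factored form U Σ V^⊤, which is essential at the problem sizes reported in section 4.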

Rank increase
In the flowchart of RRAM (Figure 1), if s = k, we do not increase the rank. Otherwise, given X_p ∈ M_s, we check the condition

‖N_{k−s}(X_p)‖_F > ε ‖G_s(X_p)‖_F,    (9)

where ε > 0 is a rank increase parameter. If (9) holds, then a significant part of G_{≤k}(X_p) lies in the normal space to M_s, in which case we consider the current rank s to be too small. In order to increase the rank of X_p, we propose a "normal correction" step that searches along the normal space. Specifically, we consider the normal vector

H(X_p) := P_{(T_{X_p} M_s)^⊥}(−∇f(X_p))

and choose a best rank-l approximation N_l(X_p) := P_l(H(X_p)) with l ≤ k − s. To verify the validity of the "normal correction" step, we will need the following result.
Proposition 1 Let X_p = U Σ V^⊤ ∈ M_s, and let N_l(X_p) = W D Y^⊤ be a compact SVD of a best rank-l approximation of H(X_p), with l ≤ rank(H(X_p)). Then the columns of W are orthogonal to those of U, the columns of Y are orthogonal to those of V, and for any α ≠ 0 the matrix X_p + α N_l(X_p) has rank s + l.

Proof It follows from (4) and (6) that H(X) = U_⊥ M V_⊥^⊤ for some M ∈ R^{(m−s)×(n−s)}. Therefore, any compact SVD of H(X) has the form (U_⊥ Ū) Σ̄ (V_⊥ V̄)^⊤ with σ̄_1 ≥ ⋯ ≥ σ̄_r > 0 and r = rank(H(X)). It follows that W D Y^⊤, a best rank-l approximation of H(X), can be chosen with W = U_⊥ Ū_l and Y = V_⊥ V̄_l, whose columns are orthogonal to those of U and V, respectively. The claim follows. □

Hence, we can perform a line search along the normal direction,

min_α f(X_p + α N_l(X_p)).

As the objective function f in (1) is quadratic, this problem has the closed-form solution

α_* = ⟨P_Ω(N_l(X_p)), P_Ω(A − X_p)⟩ / ‖P_Ω(N_l(X_p))‖_F².

Moreover, given the low-rank structures X_p = U Σ V^⊤ and N_l(X_p) = W D Y^⊤, it readily follows from Proposition 1 that

X_+ = X_p + α_* N_l(X_p) = U Σ V^⊤ + α_* W D Y^⊤

has rank s + l. In addition, the SVD of X_+, denoted by X_+ = U_+ Σ_+ V_+^⊤, can be directly assembled by sorting the columns of [U W] and [V Y] according to the diagonal entries of diag(Σ, α_* D). We summarize all these steps in Algorithm 2. In practice, the rank increase number l is set to a constant.

A quick verification of the rank increase step is presented in Figure 2. We consider a method that combines the fixed-rank optimization (RBB) with the rank-one increase. Figure 2 reports the evolution of ‖N_{k−s}(X)‖_F and ‖G_s(X)‖_F for solving problem (2) with a rank-one initial point. It is observed that when the stage of fixed-rank optimization is finished, ‖G_s(X)‖_F has dramatically decreased while ‖N_{k−s}(X)‖_F has not. Moreover, increasing the rank leads to a reduction of ‖N_{k−s}(X)‖_F, which verifies the effect of the rank increase.
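The normal correction step can be sketched as follows; `rank_increase` is a hypothetical helper implementing the normal projection, the best rank-l truncation, and the closed-form step size of the quadratic objective.

```python
import numpy as np

def rank_increase(U, s, Vt, A, mask, l):
    """Sketch of the normal correction step: from X = U diag(s) Vt of rank
    len(s), step along N_l, a best rank-l approximation of the normal
    component of -grad f, with the exact step of the quadratic objective."""
    X = (U * s) @ Vt
    G = mask * (A - X)                                   # -grad f(X) = P_Omega(A - X)
    V = Vt.T
    H = G - U @ (U.T @ G) - (G @ V) @ Vt + U @ ((U.T @ G) @ V) @ Vt  # normal part H(X)
    W, d, Yt = np.linalg.svd(H, full_matrices=False)
    N = (W[:, :l] * d[:l]) @ Yt[:l]                      # N_l(X): best rank-l part of H
    PN = mask * N
    alpha = np.sum(PN * G) / np.sum(PN * PN)             # closed-form line search
    return X + alpha * N                                 # rank grows to len(s) + l

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 7))
mask = rng.random(A.shape) < 0.7
U, sv, Vt = np.linalg.svd(mask * A, full_matrices=False)
U, sv, Vt = U[:, :2], sv[:2], Vt[:2]                     # a rank-2 point X_p
Xp = rank_increase(U, sv, Vt, A, mask, 1)                # rank becomes 2 + 1 = 3
```

Since the column spaces of the two summands are orthogonal (Proposition 1), a practical implementation would assemble the factored SVD of the new iterate directly rather than forming the dense sum as done here for clarity.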

Rank reduction
As M_s is not closed, an iterate sequence in M_s may have limit points of rank less than s. If the iterates are found to approach M_{≤(s−1)}, then it makes sense to solve the optimization problem on a set of fixed-rank matrices of smaller rank. In addition, this reduces the dimension of the problem and thereby saves memory.
In practice, we observe that gaps in the singular values also play an important role in fixed-rank optimization; see the numerical examples in subsection 4.2. Therefore, we propose to check whether there is a large gap in the singular values and, if so, to decrease the rank accordingly. To this end, we consider a criterion based on the relative change of the singular values,

(σ_i − σ_{i+1}) / σ_i > ∆,    (12)

where ∆ ∈ (0, 1) is a given threshold. Figure 3 presents two typical distributions of singular values, for which σ_i/σ_1 and (σ_i − σ_{i+1})/σ_i are also computed. We observe that the large gap of singular values in the first row can be detected by (12) with ∆ = 0.1, while the condition σ_i < σ_1 ∆ is not activated. In order to avoid losing too much information, we do not reduce the rank aggressively: a large gap is only detected by finding the index

r̂ = argmax { (σ_i − σ_{i+1}) / σ_i : (σ_i − σ_{i+1}) / σ_i > ∆ }.    (13)

Note that if no large gap is detected in terms of the threshold ∆, then we do not reduce the rank, i.e., r̂ = s. Otherwise, we choose the index that maximizes the gap (σ_i − σ_{i+1})/σ_i. A benefit of taking the maximum of (σ_i − σ_{i+1})/σ_i is that we can exclude some undesirable cases in which (12) is satisfied at many indices. In such a case, if we considered the smallest i such that (σ_i − σ_{i+1})/σ_i > 0.1, we might drop all singular values except for the largest one, which is too aggressive. The proposed rank reduction step is listed in Algorithm 3.

Algorithm 3: Rank reduction
Input: X_p = U Σ V^⊤ ∈ M_s, ∆ ∈ (0, 1).

Algorithm 3 is just one among many possible rank reduction strategies that retain the "largest" singular values and set the other ones to zero. The performance of those strategies is highly problem-dependent. Nevertheless, without such a strategy, the rank-adaptive method may miss opportunities to reduce the rank and thereby benefit from a reduced computational cost. Moreover, in order to address the issue, mentioned in subsection 2.1, that M_k is not closed, some rank reduction mechanism is required to rule out sequences with infinitely many points in M_{>s} and a limit point in M_s.

Remark 1 In comparison with the algorithms listed in Table 1, the proposed rank-adaptive method in Figure 1 has several novel aspects. Firstly, as an inner iteration for the fixed-rank optimization, the RBB method with non-monotone line search tends to outperform other Riemannian methods such as RCG; see subsections 4.1 and 4.4. Secondly, we search along the normal space to increase the rank. This contrasts with [19], where the update direction is obtained by projecting the antigradient onto a tangent cone. Moreover, in contrast with [15, §III.A], we do not assume that the fixed-rank algorithm (Algorithm 1) returns a point X_p satisfying G_s(X_p) = 0; however, if it does, then the proposed rank increase step coincides with the update X_+ = X + α G_{≤(s+l)}(X) of [15] in view of (7). Thirdly, the proposed rank reduction mechanism differs from the one in [19], as explained in subsection 3.4. Its performance is illustrated in subsection 4.2.
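The gap-detection rule (12) behind Algorithm 3 might look as follows in Python; this is a sketch with a hypothetical helper name, and ∆ defaults to the value 0.1 used in the paper.

```python
import numpy as np

def reduce_rank(U, s, Vt, delta=0.1):
    """Sketch of Algorithm 3: truncate at the largest relative gap
    (s[i] - s[i+1]) / s[i] exceeding delta; keep the rank otherwise."""
    gaps = (s[:-1] - s[1:]) / s[:-1]
    if gaps.size == 0 or gaps.max() <= delta:
        return U, s, Vt                       # no large gap: keep the rank
    r = int(np.argmax(gaps)) + 1              # keep the r dominant singular values
    return U[:, :r], s[:r], Vt[:r]

# Singular values with a clear gap after the third one:
s = np.array([10.0, 9.0, 8.5, 0.5, 0.4])
U, Vt = np.eye(5), np.eye(5)
_, s_new, _ = reduce_rank(U, s, Vt)
print(s_new.tolist())  # [10.0, 9.0, 8.5]
```

Note that the first relative gap here equals exactly 0.1, so the smallest-index rule would truncate after the very first singular value, whereas taking the maximizing index keeps the three dominant ones.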

Numerical experiments
In this section, we first demonstrate the effectiveness of the proposed rank-adaptive algorithm, and then compare it with other methods on low-rank matrix completion problems. For simplicity, we restrict our comparisons to manifold-based methods since they usually perform well on this problem and are comparable with other low-rank optimization methods; see [17].
Unless otherwise mentioned, the data matrix A in problem (2) is generated as A = L R^⊤, where L ∈ R^{m×r} and R ∈ R^{n×r} are random matrices with i.i.d. standard normal entries. The sample set Ω is randomly generated on {1, …, m} × {1, …, n} with uniform sampling probability |Ω|/(mn). The oversampling rate (OS, see [17]) for A is defined as the ratio of |Ω| to the dimension of M_r, i.e.,

OS := |Ω| / ((m + n − r) r).
Note that OS represents the difficulty of recovering the matrix A, and it should be larger than 1.
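A synthetic instance with a prescribed oversampling rate can be generated as in the following sketch, where the sampling probability is chosen so that OS ≈ 3 in expectation (variable names are ours).

```python
import numpy as np

# Generate a synthetic instance A = L R^T and check its oversampling rate.
rng = np.random.default_rng(0)
m, n, r = 100, 80, 5
L_fac = rng.standard_normal((m, r))
R_fac = rng.standard_normal((n, r))
A = L_fac @ R_fac.T                              # rank-r data matrix
p = 3.0 * (m + n - r) * r / (m * n)              # sampling probability for OS ~= 3
mask = rng.random((m, n)) < p
OS = mask.sum() / ((m + n - r) * r)              # OS := |Omega| / dim(M_r)
```

With independent Bernoulli sampling, |Ω| concentrates around p·mn, so the realized OS is close to the target value of 3.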
The stopping criteria of the algorithms are based on the relative gradient norm, the relative residual ‖P_Ω(X − A)‖_F / ‖P_Ω(A)‖_F, and the relative change of the function value, with tolerances ε_g, ε_Ω, and ε_f, respectively. Once one of these criteria is satisfied, or the maximum iteration number 1000 is reached, we terminate the algorithms. Note that these criteria are introduced in [17]. The default tolerance parameters are chosen as ε_g = 10^{−12}, ε_Ω = 10^{−12}, ε_f = 10^{−4}. The rank increase parameter ε in (9) is set to 10, and the rank increase number l is 1. The rank reduction threshold ∆ in (13) is set to 0.1. The maximum number of inner iterations j_max is set to 100, and the other parameters in Algorithm 1 are the same as those in [5]. All experiments are performed on a laptop with a 2.7 GHz dual-core Intel i5 processor and 8 GB of RAM, running MATLAB R2016b under macOS 10.15.2. The code that produces the results is available from https://github.com/opt-gaobin/RRAM.

Comparison on the fixed-rank optimization
Before we test the rank-adaptive method, we first illustrate the performance of the RBB method proposed in Algorithm 1 on the fixed-rank optimization problem (1).
We compare RBB with a state-of-the-art fixed-rank method, LRGeomCG [17], which is a Riemannian CG method for fixed-rank optimization.
The problem is generated with m = n = 10000, r = 40, and OS = 3. The rank parameter k in (1) is set to k = r, which means the true rank of A is provided. The initial point is generated by

X^{(0)} = P_{M_k}(P_Ω(A)),    (14)

i.e., a best rank-k approximation of the zero-filled data matrix. Figure 4 reports the numerical results. It is observed that the RBB method performs better than LRGeomCG in terms of the time needed to achieve comparable accuracy. In addition, one can observe the non-monotone behavior of RBB that stems from the non-monotone line search procedure in Algorithm 1. Results for problems with different sizes and oversampling rates are reported in Figure 5. Notice that RBB requires less running time than LRGeomCG as the size of the problem (m and k) increases. Additionally, we observe that RBB outperforms LRGeomCG in all oversampling settings.

Comparison on the rank reduction
The effectiveness of the rank reduction step in RRAM is verified in this subsection. We combine RRAM with the RBB method as the fixed-rank optimization and call the resulting method RRAM-RBB. For comparison, we also test LRGeomCG to illustrate that the rank-adaptive method is more suitable than fixed-rank methods for low-rank matrix completion. We generate problem (2) with m = n = 1000 and OS = 3. The data matrix A = L R^⊤ is randomly generated with rank 10. The following comparison is twofold, based on two types of initial guesses whose singular value distributions resemble those in Figure 3.
In a first set of experiments, the methods are initialized by (14), i.e., the best rank-k approximation of P_Ω(A). Given the rank parameter k > rank(A) = 10, the distribution of this type of initial point is similar to the one in the first row of Figure 3, which has a large gap in the singular values. We test different rank parameters k chosen from the set {10, 11, …, 20}. The numerical results are presented in Figure 6, and the observations are summarized as follows.
- In Figure 6(a)-(b), it is observed that for LRGeomCG, the best choice is by far k = 10, which is the true rank of the data matrix A. This reveals that the performance of the fixed-rank optimization method LRGeomCG depends strongly on the choice of the rank parameter, while the proposed rank-adaptive method obtains comparable results for all choices.
- The updated rank of RRAM is shown in Figure 6(c). Notice that the rank reduction step is invoked in the initialization stage of Figure 1 for most choices of the initial rank, and it reduces the rank to 10. In the cases k = 14, 15, 16, although an initial rank reduction is not activated, the algorithm detects the large gap of singular values when the first call of the fixed-rank method (Algorithm 1) terminates (at its maximum iteration number 100) and reduces k to the true rank 10.
- It is worth mentioning that in Figure 6, when a rank reduction step completes, it often increases the function value at the very beginning, but the algorithm quickly converges once the true rank is detected.
Another class of initial points is randomly generated as a low-rank matrix L R^⊤ of rank k. Its singular value distribution is uniform, like the one in the second row of Figure 3. As before, we compare RRAM-RBB with LRGeomCG on problems with different rank parameters; the results are reported in Figure 7. We observe that RRAM-RBB reduces the rank for all choices of k > 10, even though the singular values of the initial point do not have a large gap. Note that in the cases k = 15 and k = 18, the first fixed-rank optimization stops after fewer than 100 iterations since the relative-change criterion is met.

Comparison on the rank increase
In this subsection, we consider a class of problems for which the data matrix A is ill-conditioned. This type of problem has been numerically studied in [15]. Specifically, we construct

A = U Σ V^⊤,

where U ∈ R^{m×r} and V ∈ R^{n×r} have orthonormal columns and Σ is diagonal with exponentially decaying positive entries; hence A has exponentially decaying singular values. We generate the problem with m = n = 1000, k = r = 20, and OS = 3. The initial point is generated by (14). We choose the rank increase parameter ε = 2 so that RRAM-RBB is prone to increase the rank. The tolerance parameter ε_g is set to 10^{−15}. We test three different settings: (I) the maximum iteration number j_max for the fixed-rank optimization is set to 5 and the rank increase number is l = 1; (II) j_max = 100 and l = 1; (III) j_max = 20 and l = 2. Figure 8 reports the evolution of the errors and the updated rank of RRAM-RBB. The observations are as follows.
- In this ill-conditioned problem, RRAM-RBB performs better than the fixed-rank optimization method LRGeomCG (k = 20). In addition, we observe that the rank reduction step is invoked at the initial point in all three settings, and RRAM-RBB increases the rank by l after each fixed-rank optimization.
- Note that the oscillation of the relative gradient in RRAM-RBB stems from the rank increase step. From the first two columns of Figure 8, it is observed that if the fixed-rank problem is solved inexactly (j_max = 5), the performance of RRAM-RBB is still comparable with that of the "exactly" solved variant (j_max = 100).

Ablation comparison on the proposed rank-adaptive mechanism
In this subsection, we perform an ablation study by incorporating the framework of RRAM (Figure 1) into the fixed-rank optimization LRGeomCG [17] (a Riemannian CG method). The resulting algorithm is called RRAM-RCG. Note that RRAM-RCG and RRAM-RBB differ only in the inner iteration.
In the first test, we compare RRAM-RCG with RRAM-RBB on problem instances generated as in subsection 4.2, with the random initial guess described there. Specifically, the problem is generated with m = n = 10000, k = 15, r = 10, and OS = 3. For both fixed-rank methods (RBB and RCG), the maximum iteration number j_max is set to 100. In Figure 9, the numerical results illustrate that RCG also enjoys the benefit of the rank reduction step (13), which reduces the rank from 15 to the true rank 10. Moreover, they indicate that using RBB instead of RCG yields a considerable improvement on this problem instance. This observation can be explained by the comparison in subsection 4.1. Another test is generated as in subsection 4.3 with m = n = 50000, k = r = 20, and OS = 3. Figure 10 reports the performance comparison of RRAM-RBB and RRAM-RCG. It shows that the proposed rank increase strategy is also effective for RCG. Notice that the performance of RRAM-RBB is slightly better than that of RRAM-RCG in terms of time efficiency.

Test on real-world datasets
In this subsection, we evaluate the performance of RRAM on low-rank matrix completion with real-world datasets. The MovieLens dataset contains movie ratings from users on different movies. In the following experiments, we choose MovieLens 100K, which consists of 100,000 ratings from 943 users on 1682 movies, and MovieLens 1M, which consists of one million ratings from 6040 users on 3952 movies.
For comparison, we test RRAM-RBB against several state-of-the-art methods that specifically target low-rank matrix completion, namely LRGeomCG [17], NIHT and CGIHT [18], and ASD and ScaledASD [14]. Note that all these methods are based on the fixed-rank problem, where the rank parameter has to be given a priori. For the two real-world datasets, we randomly choose 80% of the known ratings as the training set and the rest as the test set. The rank parameter k is set to 10 for all tested algorithms, and we terminate the algorithms once the time budget is reached or their own stopping criteria are satisfied. Figure 11 shows the singular values of the initial point (14), namely the rank-10 approximation of the zero-filled MovieLens dataset. It is observed that for both examples, the largest gap is detected between the first two singular values by the rank reduction criterion (13). According to the rank-adaptive framework in Figure 1, RRAM-RBB thereby reduces the rank to one just after the initialization (14), which explains the first rank reduction shown for RRAM-RBB in the following figures.
The numerical results are illustrated in Figure 12. Note that RRAM-RBB achieves the best final RMSE among all methods on the MovieLens 100K dataset (m = 943, n = 1682), and it is comparable with the other algorithms in terms of time efficiency. The evolution of the updated rank shows that RRAM-RBB adaptively increases the rank and automatically finds a rank that is lower than the rank given to the other methods, yet yields a smaller RMSE. On the larger dataset MovieLens 1M (m = 6040, n = 3952), RRAM-RBB still attains a comparable RMSE. In summary, the rank-adaptive method accepts flexible choices of the rank parameter and is able to find a suitable rank.

Conclusion
As the set of fixed-rank matrices is not closed, a rank-adaptive mechanism can be a promising way to solve optimization problems with rank constraints. This paper concerns the low-rank matrix completion problem, which can be modeled with a bounded-rank constraint. A Riemannian rank-adaptive method is proposed, featuring rank increase and rank reduction mechanisms that are novel in the ways discussed in Remark 1. Numerical comparisons on synthetic and real-world data show that the proposed rank-adaptive method compares favorably with other algorithms for low-rank matrix completion. This suggests that the proposed method might also perform well on other low-rank optimization problems, such as those mentioned in [12,19,4,16].