This special issue focuses on sparse and low-rank optimization, a distinct new area of research in optimization. A solution is sparse if it has very few nonzero entries (compared to its dimension) or possesses other kinds of simple structure, a prominent example being low-rank matrices. Owing much to studies in signal representation, compressive sensing, and regularized regression, sparse and low-rank optimization has come to be recognized as a computational tool that plays a central role in many data processing problems, especially those involving extremely large data sets. Its development has been motivated by, and has in turn nurtured, developments in many other areas of data science.

This collection of selected papers covers recent theoretical and numerical advances in sparse and low-rank optimization, on topics including first-order methods such as the alternating direction method of multipliers (ADMM), distributed consensus optimization, and image reconstruction. The papers are summarized as follows.

Jin Wang and Jian-Feng Cai consider a data-driven tight frame to model multi-channel images and apply it to recover color-depth images. They construct a discrete tight frame system for each image channel and assume that the sparse coefficients for the different channels are jointly sparse. Experimental results show that the proposed approach performs better than other state-of-the-art joint color and depth image reconstruction approaches.
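As a generic illustration of the joint sparsity idea (the notation here, including the channel frames \(W_c\) and the parameter \(\lambda \), is ours and not necessarily the paper's), a common way to couple \(C\) channels is a mixed \(\ell _{2,1}\) penalty on the frame coefficients \(v_c = W_c u_c\):
\[
\min_{\{W_c\},\{v_c\}}\ \sum_{c=1}^{C}\Vert W_c u_c - v_c\Vert _2^2 \;+\; \lambda \sum_{j}\Bigl(\sum_{c=1}^{C} v_{c,j}^2\Bigr)^{1/2},
\]
which encourages the coefficient vectors of all channels to share a common support.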

In the paper by Xiao-Jing Ye, a primal-dual algorithm involving consensus constraints is proposed for a variety of non-smooth image reconstruction problems with large-scale and complex data. The paper focuses on the case where the data fidelity term can be decomposed into multiple relatively simple functions and deployed to parallel computing units, which cooperatively compute a consensual solution of the original problem. Since the subproblems usually have closed-form solutions or can be solved efficiently at the local computing units, the per-iteration computational cost is very low. A comprehensive convergence analysis of the algorithm, including its convergence rate, is provided.
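Schematically (in our notation, not necessarily the paper's), a decomposable fidelity term can be handled by giving each of the \(m\) computing units its own copy of the unknown and enforcing agreement through a consensus constraint:
\[
\min_{x_1,\ldots ,x_m,\,z}\ \sum_{i=1}^{m} f_i(x_i) + g(z)\quad \mathrm{s.t.}\quad x_i = z,\ i=1,\ldots ,m,
\]
where \(f_i\) is the \(i\)-th piece of the fidelity term, \(g\) collects the remaining (e.g., regularization) terms, and the constraints force the local variables to agree on the consensual solution \(z\).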

William Hager, Cuong Ngo, Maryam Yashtini and Hong-Chao Zhang propose an alternating direction approximate Newton (ADAN) method for minimizing \(\phi (Bu)+(1/2)\Vert Au-f\Vert _2^2\), where \(\phi \) is convex and possibly nonsmooth, and \(A\) and \(B\) are matrices. The proposed algorithm is designed to handle applications where \(A\) is a large, dense, ill-conditioned matrix. The algorithm is based on ADMM and an approximation to Newton’s method in which a term in the Hessian is replaced by a Barzilai–Borwein (BB) approximation. It is shown that ADAN converges to a solution of the problem. Numerical results are provided for parallel magnetic resonance imaging (PMRI) problems.
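In rough terms (the exact formulation in the paper may differ), the expensive operator \(A^{\mathsf {T}}A\) appearing in the Newton step is approximated by a multiple of the identity, \(A^{\mathsf {T}}A\approx \delta _k I\), with a Barzilai–Borwein choice such as
\[
\delta _k = \frac{\Vert A(u_k-u_{k-1})\Vert _2^2}{\Vert u_k-u_{k-1}\Vert _2^2},
\]
so that each subproblem reduces to an inexpensive proximal-type step that avoids forming or inverting the large dense matrix \(A^{\mathsf {T}}A\).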

In the paper by Qian Dong, Xin Liu, Zai-Wen Wen and Ya-Xiang Yuan, a parallel subspace correction framework for composite convex optimization is developed. The variables are first divided into a few blocks. At each iteration, their approach solves suitable subproblems simultaneously for all the blocks, constructs a search direction by combining the subproblem solutions, and finally moves to a new point along that direction with a step size satisfying the Armijo line search condition. The convergence of the approach is established. Numerical results show that the parallel subspace correction method with overlapping blocks of variables is helpful when the problem data have certain special structures.
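In outline (our notation), if \(\tilde{x}_i^{\,k}\) denotes the solution of the \(i\)-th block subproblem at the iterate \(x^k\), the combined search direction and the update take the form
\[
d^k=\sum_{i=1}^{N} U_i\bigl(\tilde{x}_i^{\,k}-x_i^{\,k}\bigr),\qquad x^{k+1}=x^k+\alpha _k d^k,
\]
where \(U_i\) embeds the \(i\)-th block into the full space and \(\alpha _k\) satisfies the Armijo condition.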

An-Ya Lin and Qing Ling investigate decentralized and privacy-preserving low-rank matrix completion. A low-rank matrix \(D = [D_1,D_2,\ldots ,D_L]\) is recovered from a subset of its entries. In a network composed of \(L\) agents, each agent \(i\) observes some entries of \(D_i\). The unknown matrix \(D\) is factorized as \(D=XY,\) where \(X\) is a public matrix shared by all the agents and \(Y = [Y_1,Y_2,\ldots ,Y_L]\), where \(Y_i\) is privately held by agent \(i\). Each agent \(i\) updates \(Y_i\) and its local estimate of \(X\), denoted by \(X_{(i)}\), in an alternating manner. Periodically, all agents exchange information about their \(X_{(i)}\) with their neighbors so that the \(X_{(i)}\) converge to a consensual estimate of \(X\). Finally, each agent \(i\) recovers the submatrix \(D_i = X_{(i)} Y_i\) of \(D\). In this process, communication may disclose to a malicious agent certain information about \(D_i\) that is deemed sensitive. The authors show that if the network topology is properly designed and the agents run their proposed algorithm, D-LMaFit, then the malicious agent is unable to reconstruct the sensitive information.
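The following Python sketch conveys the flavor of such a scheme under simplifying assumptions: it uses plain alternating least squares for the local factor updates and a simple weighted averaging of the \(X_{(i)}\) between neighbors. The function name, the mixing matrix \(W\), and the update order are our illustrative choices, not the exact D-LMaFit updates.

```python
import numpy as np

def decentralized_completion(D_blocks, masks, W, r, iters=200, exchange_every=5):
    """Illustrative sketch only, not the exact D-LMaFit updates.

    D_blocks[i]: n-by-m_i block with observed entries (zeros elsewhere);
    masks[i]:    boolean n-by-m_i array marking observed entries;
    W:           L-by-L row-stochastic mixing matrix, W[i, j] > 0 only
                 if agents i and j are neighbors;
    r:           target rank of the factorization D = X Y.
    """
    L = len(D_blocks)
    n = D_blocks[0].shape[0]
    rng = np.random.default_rng(0)
    X = [rng.standard_normal((n, r)) for _ in range(L)]   # local copies X_(i)
    Y = [rng.standard_normal((r, D_blocks[i].shape[1])) for i in range(L)]  # private Y_i

    for k in range(iters):
        for i in range(L):
            # Impute unobserved entries with the current estimate X_(i) Y_i.
            Z = np.where(masks[i], D_blocks[i], X[i] @ Y[i])
            # Local alternating least-squares updates (no communication here).
            Y[i] = np.linalg.lstsq(X[i], Z, rcond=None)[0]
            X[i] = np.linalg.lstsq(Y[i].T, Z.T, rcond=None)[0].T
        if k % exchange_every == 0:
            # Periodic exchange: each agent averages the public factor with its
            # neighbors; the private factors Y_i are never transmitted.
            X = [sum(W[i, j] * X[j] for j in range(L)) for i in range(L)]

    return [X[i] @ Y[i] for i in range(L)]   # recovered blocks D_i
```

Note that only the local copies \(X_{(i)}\) are ever communicated, while each private factor \(Y_i\) stays on its own agent, which reflects the privacy mechanism described above.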

Nonconvex sorted \(\ell _1\) minimization for sparse approximation is proposed by Xiao-Lin Huang, Lei Shi, and Ming Yan. The sorted \(\ell _1\) penalty is a weighted \(\ell _1\) norm in which smaller weights are assigned to components with larger magnitudes, which makes it nonconvex. The authors develop iteratively reweighted \(\ell _1\) and sorted thresholding methods for solving problems with the sorted \(\ell _1\) penalty. Both methods are shown to converge to local minimizers. Their numerical results demonstrate that sorted \(\ell _1\) performs better than the standard weighted \(\ell _1\).
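Concretely, with weights \(0\leqslant \lambda _1\leqslant \lambda _2\leqslant \cdots \leqslant \lambda _n\), the sorted \(\ell _1\) penalty can be written as
\[
F(x)=\sum_{i=1}^{n}\lambda _i |x|_{[i]},
\]
where \(|x|_{[1]}\geqslant |x|_{[2]}\geqslant \cdots \geqslant |x|_{[n]}\) are the magnitudes of the entries of \(x\) sorted in decreasing order; assigning the smallest weights to the largest magnitudes reduces the bias on large entries but makes the penalty nonconvex.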

Sheng-Long Zhou, Nai-Hua Xiu, Zi-Yan Luo and Ling-Chen Kong consider sparse and low-rank covariance matrix estimation. An \(\ell _1\)-norm penalty and a nuclear norm penalty are added to promote sparsity and low-rankness, respectively. They prove that, with high probability, the estimation error in the Frobenius norm is of the order \(\mathcal {O}(\sqrt{(s\log p)/n})\), where \(s\) and \(p\) are the number of nonzero entries and the dimension of the population covariance matrix, respectively, and \(n\) denotes the sample size. They then propose to solve the problem with ADMM and present results of their numerical simulations.
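As a minimal sketch of how ADMM can separate the two penalties, consider the model \(\min_X \frac{1}{2}\Vert X-S\Vert _F^2+\lambda \Vert X\Vert _1+\tau \Vert X\Vert _*\) with \(S\) a sample covariance matrix; the least-squares loss, the splitting \(X=Z\), and all parameter names below are our illustrative assumptions and may differ from the estimator analyzed in the paper.

```python
import numpy as np

def soft(A, t):
    """Entrywise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def svt(A, t):
    """Singular value thresholding: the proximal operator of t * ||.||_*."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def sparse_lowrank_cov(S, lam, tau, rho=1.0, iters=300):
    """ADMM sketch for min_X 0.5*||X - S||_F^2 + lam*||X||_1 + tau*||X||_*,
    with the two penalties separated through the splitting X = Z."""
    X = S.copy(); Z = S.copy(); U = np.zeros_like(S)
    for _ in range(iters):
        # X-update: l1 proximal step on a weighted average of the data S
        # and the current consensus point Z - U.
        X = soft((S + rho * (Z - U)) / (1.0 + rho), lam / (1.0 + rho))
        # Z-update: nuclear norm proximal step (singular value thresholding).
        Z = svt(X + U, tau / rho)
        # Dual ascent on the scaled multiplier for the constraint X = Z.
        U = U + X - Z
    return Z
```

Each iteration thus reduces to one entrywise soft-thresholding and one singular value decomposition, which is what makes ADMM attractive for this composite penalty.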

All the papers in this special issue have been peer-reviewed to the standard of the journal. We greatly appreciate the voluntary work and expert reviews of the anonymous referees.

We want to express our deep and sincere gratitude to all the authors, who have made tremendous contributions and offered generous support to this issue. Finally, we are grateful to Professor Ya-Xiang Yuan, the Editor-in-Chief of the Journal of the Operations Research Society of China, who approved this special issue and provided us with guidance throughout the editorial process.

April, 2015