The aim of this special issue is to focus on the growing interaction between inverse problems in imaging science and optimization, which in recent years has given rise to significant advances in both areas: optimization-based tools have been developed to solve challenging image reconstruction problems, while the experience gained with imaging problems has led to an improved and deeper understanding of certain optimization algorithms. The issue includes ten peer-reviewed papers whose contributions represent new advances in numerical optimization for inverse problems with significant impact in signal and image processing.

In a historical context, the first fundamental application of data inversion to imaging was X-ray computed tomography (CT), invented by G.N. Hounsfield at the beginning of the seventies. It was a breakthrough in radiology, considered the greatest achievement since the discovery of X-rays, and the first example of an imaging system in which images are obtained from the acquired data by solving a mathematical problem. The success of this technique stimulated research in several directions. On the one hand, physicists and engineers developed new imaging techniques such as PET, SPECT and MRI; on the other hand, applied mathematicians carried out research both on the specific mathematical problem of CT, namely the inversion of the Radon transform, and on more general mathematical problems, known as Inverse Problems. In imaging applications these problems arise whenever the object to be imaged cannot be observed directly because it is in the interior of a body, as in medicine or geophysics, too distant, as in astronomy, or too small, as in microscopy. The image must therefore be derived from data which can be measured and which are related to the unknown object by some linear or nonlinear relationship.

A specific feature of the equations arising in the formulation of an inverse problem is that they are ill-posed in the sense of Hadamard: the solution may not exist, may not be unique, or may not depend continuously on the data. A first approach for circumventing these difficulties was the Regularization Theory proposed by A.N. Tikhonov in the mid-sixties of the last century: first, the original problem, formulated as the solution of a linear or nonlinear equation, is transformed into a least-squares problem, i.e. one does not search for a solution which exactly reproduces the data but for a solution which has minimal distance, in some metric, from the data; next, since in general this least-squares problem is still ill-posed, the solution is “regularized” by adding to the least-squares functional a suitable penalty term enforcing existence, uniqueness and regularity of the minimizer. Regularization Theory was the main research topic in the field for decades and was extended at the beginning of the eighties by considering not only least-squares approaches but also more general maximum-likelihood or Bayesian approaches.
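
In its simplest linear form, and with notation chosen here only for illustration (A a discretized forward operator, y the measured data, μ > 0 the regularization parameter), the Tikhonov-regularized least-squares problem reads

    \min_{x} \; \|A x - y\|_2^2 + \mu \, \|x\|_2^2 ,

where the penalty term enforces existence, uniqueness and stability of the minimizer.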

In conclusion, Inverse Problems are now formulated as variational problems with an objective functional consisting of two terms: the first depends on the data and on the unknown of the problem; it represents a distance or a divergence between the computed data and the measured ones and is often called the data fidelity functional. The second is a penalty term enforcing properties of the solution such as regularity, sparsity or edge preservation, and is usually called the regularization functional. This second term is multiplied by a parameter balancing the relative weight of the two terms, known as the regularization parameter (or hyper-parameter when its statistical meaning is emphasized). The definition of criteria for an “optimal” choice of this parameter is another important topic in Inverse Problems theory.
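
Schematically, and again with notation introduced only for illustration, such a variational formulation can be written as

    \min_{x} \; D\big(A(x), y\big) + \mu \, R(x) ,

where D is the data fidelity functional measuring the discrepancy between the computed data A(x) and the measured data y (for instance a squared Euclidean distance or a Kullback–Leibler divergence), R is the regularization functional and μ > 0 is the regularization parameter.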

In real applications the data are discrete and the unknown solution must also be sampled; therefore the computation of the solution of an inverse problem is a problem of numerical optimization that, in the case of imaging, can involve millions of variables. Thus the efficiency of the optimization algorithms becomes a crucial issue, and the interaction between inverse problems and numerical optimization is quite natural and necessary. The selected papers provide contributions in three main areas: the reengineering of classical optimization schemes for imaging problems, the development of new methods based on sophisticated tools from variational analysis, and the design of new mathematical models for challenging inverse problem applications.
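
Purely as an illustrative sketch of what a discretized inverse problem looks like once it is handed to an optimization algorithm, the following Python fragment sets up a small synthetic Tikhonov-regularized least-squares problem and solves it by plain gradient descent; the operator, data, sizes and step size are invented for the example, and real imaging problems involve far larger, structured operators and the more sophisticated algorithms discussed in this issue.

    import numpy as np

    # Illustrative sizes only; imaging problems may have millions of unknowns
    # and structured (convolution- or Radon-type) forward operators.
    m, n = 300, 200
    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, n))                   # synthetic forward operator
    x_true = rng.standard_normal(n)                   # "unknown" object used to simulate data
    y = A @ x_true + 0.01 * rng.standard_normal(m)    # noisy measured data
    mu = 0.1                                          # regularization parameter

    # Gradient descent on f(x) = ||A x - y||^2 + mu * ||x||^2
    x = np.zeros(n)
    step = 1.0 / (2.0 * (np.linalg.norm(A, 2) ** 2 + mu))   # 1 / Lipschitz constant of grad f
    for _ in range(500):
        grad = 2.0 * A.T @ (A @ x - y) + 2.0 * mu * x
        x -= step * grad

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))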

As regards the review of classical methods, in the paper by Chen and Gui the analysis of the convergence properties of gradient projection methods for constrained optimization leads to conditions ensuring successful performance in total variation image reconstruction, while Setzer, Steidl and Morgenthaler show that suitable superstep cycles can speed up the fixed-step versions of standard gradient projection approaches. Another classical scheme widely studied for imaging problems is the alternating direction method: in the paper by Chen, Hager, Yashtini, Ye and Zhang, a Bregman operator splitting algorithm with variable stepsize is introduced for improving total variation image reconstruction, with application to partially parallel magnetic resonance imaging. Special versions of the alternating direction method are developed in the paper by Xiao, Zhu and Wu to minimize a convex non-smooth ℓ1-norm function for sparse signal reconstruction, and in the paper by Han, Yuan, Zhang and Cai for solving a linearly constrained convex program with a block-separable structure. The latter problem arises in applications of statistics and image processing where low-rank and sparse components of matrices must be recovered from incomplete and noisy observations.

In the paper by Bot and Hendrich, powerful tools of convex analysis make it possible to address a general unconstrained nondifferentiable convex optimization problem: its Fenchel dual problem is considered and regularized in two steps into a differentiable, strongly convex problem with Lipschitz continuous gradient, which can be efficiently solved via a fast gradient method. In the paper by Lenzen, Becker, Lellmann, Petra and Schnörr, the focus is a class of adaptive non-smooth convex variational problems for image denoising. The adaptivity is introduced in the regularization term and is modeled by a set-valued mapping with closed, compact and convex values. This extension gives rise to a class of quasi-variational inequalities, which are analyzed to devise conditions for the existence of solutions and to develop a suitable algorithmic framework for the numerical solution. In the paper by Brianzi, Di Benedetto and Estatico, the formulation of classical regularization methods in Banach spaces is investigated and, thanks to the extension of preconditioning techniques previously proposed for Hilbert spaces, effective strategies for improving the quality of the reconstructed images are introduced. A method for the minimization of an ℓ1-norm penalized least-squares functional subject to linear equality constraints is presented and evaluated on a synthetic problem in magnetoencephalography by Loris and Verhoeven; the proposed method combines ideas from the generalized iterative soft-thresholding and basis pursuit algorithms into a single unified approach.

In the paper by Bergounioux and Piffet, the problems of image denoising and texture extraction are addressed by a second-order decomposition model that overcomes drawbacks of previous first- and second-order total variation models. The model is analyzed from both the theoretical and the numerical point of view, and experiments are presented to emphasize its usefulness in practical imaging applications.
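
As a generic illustration only, and not a reproduction of the specific algorithms of the papers summarized above, the following Python fragment sketches the classical iterative soft-thresholding (ISTA) scheme for an ℓ1-norm penalized least-squares problem; this is the basic proximal-gradient idea that several of the contributions in this issue refine, accelerate or generalize, and all names, sizes and parameters below are invented for the example.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1: componentwise shrinkage towards zero.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, y, mu, iterations=500):
        # Minimize 0.5 * ||A x - y||^2 + mu * ||x||_1 by iterative soft-thresholding.
        x = np.zeros(A.shape[1])
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)      # 1 / Lipschitz constant of the smooth term
        for _ in range(iterations):
            grad = A.T @ (A @ x - y)                  # gradient of the data fidelity term
            x = soft_threshold(x - step * grad, step * mu)
        return x

    # Tiny synthetic example (sizes and sparsity chosen only for illustration).
    rng = np.random.default_rng(1)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[:5] = 3.0                                  # sparse "signal"
    y = A @ x_true + 0.01 * rng.standard_normal(80)
    x_rec = ista(A, y, mu=0.1)
    print("components above threshold:", int(np.sum(np.abs(x_rec) > 0.1)))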

The guest editors are indebted to the authors of the special issue and to the referees who took care to review all the submitted papers. Finally, we wish to thank William Hager, editor-in-chief of Computational Optimization and Applications, for his useful suggestions as well as for the publication of this special issue.