This special issue contains five papers dedicated to Nonlinear Continuous Optimization. All contributions went through a strict and detailed refereeing process and were revised to meet the high standards of the journal.

The contributions span a diverse range of topics and applications. Numerical methods are presented and analyzed for optimization problems of the following types: (i) nonconvex and nonsmooth, (ii) smooth and box-constrained, (iii) large-scale and constrained, (iv) convex and nonsmooth, and (v) infinite-dimensional and constrained. Applications come from image processing, portfolio optimization, inverse kinematics, and optimal control.

In the paper “An inertial forward–backward algorithm for the minimization of the sum of two nonconvex functions,” Boţ, Csetnek, and László propose a forward–backward proximal-type algorithm with inertial/memory effects for minimizing the sum of a nonsmooth function and a smooth one in the nonconvex setting. They prove that every sequence of iterates converges to a critical point provided an appropriate regularization of the objective satisfies the Kurdyka–Łojasiewicz inequality, which is fulfilled, for instance, by semi-algebraic functions. They illustrate the theoretical results with two numerical experiments: the first concerns the ability to recover local optimal solutions of nonconvex optimization problems, while the second refers to the restoration of a noisy blurred image.
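To fix ideas, a generic inertial forward–backward step for minimizing $f+g$, with $f$ nonsmooth and $g$ smooth, can be sketched as follows (the precise placement of the inertial term and the parameter conditions in the paper may differ):
\[
x^{k+1} \in \operatorname{prox}_{\lambda f}\bigl(x^{k} - \lambda \nabla g(x^{k}) + \beta\,(x^{k}-x^{k-1})\bigr), \qquad \lambda>0,\ \beta\ge 0,
\]
where $\beta\,(x^{k}-x^{k-1})$ carries the inertial (memory) effect and $\operatorname{prox}_{\lambda f}$ denotes the proximal mapping of $f$, which may be set-valued in the nonconvex case.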

The paper “On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems”, by Shefi and Teboulle, analyzes the proximal alternating linearized minimization algorithm for solving nonsmooth convex minimization problems in which the objective is the sum of a smooth convex function and block-separable nonsmooth extended real-valued convex functions. For this method, the authors prove a global non-asymptotic sublinear rate of convergence. When the number of blocks is two and the smooth coupling function is quadratic, they present a fast version of the method that is shown to enjoy a global sublinear efficiency estimate improved by a square-root factor. Numerical examples illustrate the potential benefits of the proposed schemes.
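Schematically, a two-block proximal alternating linearized minimization step for $\min_{x,y} f(x)+g(y)+H(x,y)$, with $H$ smooth, reads (the stepsize rules and assumptions being those of the paper):
\[
x^{k+1} \in \operatorname{prox}_{\frac{1}{c_k} f}\Bigl(x^{k} - \tfrac{1}{c_k}\,\nabla_x H(x^{k},y^{k})\Bigr), \qquad
y^{k+1} \in \operatorname{prox}_{\frac{1}{d_k} g}\Bigl(y^{k} - \tfrac{1}{d_k}\,\nabla_y H(x^{k+1},y^{k})\Bigr),
\]
where $c_k$ and $d_k$ are chosen in accordance with the block Lipschitz constants of $\nabla H$.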

Optimal control problems are constrained infinite-dimensional optimization problems for which efficient numerical methods are in constant demand. In their paper “Dualization and discretization of linear-quadratic control problems with bang–bang solutions,” Alt, Kaya and Schneider propose to solve numerically the dual of control-constrained linear-quadratic optimal control problems. They derive the dual problem for a quadratic regularization of the primal problem, in order to circumvent the difficulties induced by bang–bang controls. They then propose a particular discretization scheme for the dual problem, under which convergence is guaranteed. By means of an example, they illustrate that solving the dual instead of the primal problem can yield significant computational savings.
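A prototypical instance of this problem class, stated here only to fix ideas (the paper treats its own, more general, setting), is the control-constrained linear-quadratic problem with regularization parameter $\varepsilon \ge 0$:
\[
\min_{u}\ \tfrac12\int_0^T \bigl(|y(t)-y_d(t)|^2 + \varepsilon\,|u(t)|^2\bigr)\,dt
\quad \text{subject to}\quad \dot y = A y + B u,\quad y(0)=y_0,\quad u(t)\in[u_{\min},u_{\max}].
\]
For $\varepsilon = 0$ the optimal control is typically of bang–bang type, and it is the quadratic regularization $\varepsilon>0$ that makes the dualization tractable.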

In real-life applications, the formulation of problems is not separate from algorithmic decisions, and the applied mathematician should be involved in both processes. The paper “On the application of an augmented Lagrangian algorithm to some portfolio problems,” by Birgin and Martínez, considers industrial finance problems to which this paradigm applies. The paradigm is studied with Algencan, freely available software for solving smooth large-scale constrained optimization problems. When it is applied to specific problems, obtaining good performance in terms of both efficacy and efficiency may depend on careful choices of options and parameters. The authors study these “user-dependent” choices through four portfolio optimization problems.
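For equality constraints, the Powell–Hestenes–Rockafellar augmented Lagrangian underlying Algencan-type methods can be sketched as
\[
L_{\rho}(x,\lambda) \;=\; f(x) \;+\; \sum_{i} \lambda_i\, h_i(x) \;+\; \frac{\rho}{2}\sum_{i} h_i(x)^2,
\]
which is approximately minimized in $x$ at each outer iteration, followed by the multiplier update $\lambda_i \leftarrow \lambda_i + \rho\, h_i(x)$ and, if needed, an increase of the penalty parameter $\rho$; inequality constraints are handled through an analogous shifted quadratic penalty. This is only a schematic description; the actual safeguards and stopping rules are documented with the software.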

The paper “A modification of the alphaBB method for box-constrained optimization and an application to inverse kinematics,” by Eichfelder, Gerlach and Sumi, considers box-constrained optimization problems. The purpose of the paper is to find a representation of the whole optimal solution set with a predefined quality; one element of this representation may then be chosen based on additional information that cannot be formulated as a mathematical function or embedded within a hierarchical problem formulation. The paper presents such an application in the field of robotic design, which can be modeled as a smooth box-constrained optimization problem. The authors extend the alphaBB method so that it can be used to find an approximation of the set of globally optimal solutions with a predefined quality. The properties of this modified alphaBB method are illustrated, and its finiteness and correctness are established.
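The alphaBB approach is built on convex quadratic underestimators of the objective over a box; a generic such underestimator, given here only as background and not as the authors' modification, is
\[
\underline{f}(x) \;=\; f(x) \;+\; \alpha \sum_{i=1}^{n} (x_i^{L}-x_i)(x_i^{U}-x_i), \qquad
\alpha \;\ge\; \max\Bigl\{0,\ -\tfrac12 \min_{x\in[x^{L},x^{U}]} \lambda_{\min}\bigl(\nabla^2 f(x)\bigr)\Bigr\},
\]
where $[x^{L},x^{U}]$ is the current box; branching on the box and bounding with $\underline{f}$ yields the branch-and-bound scheme that the authors adapt in order to approximate the whole set of globally optimal solutions.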

We, the guest editors of this special issue, are deeply grateful to the Editor-in-Chief, Martine Labbé, who has helped and guided us through all stages of the editorial process.

We of course thank the authors for their patience during the editorial process and for their fantastic contributions.

We are grateful to all referees for their expert and careful reading of the manuscripts and for their fundamental contribution to the high quality of this special issue.