Preface to the special issue on advances in continuous optimization on the occasion of EUROPT 2015
This special issue of MMOR contains carefully peer-reviewed papers presented at the 13th EUROPT Workshop on Advances in Continuous Optimization, held on July 8–10, 2015, in Edinburgh and organized by Jacek Gondzio and Julian Hall. The workshop is part of a series of annual events of EUROPT, the EURO working group on Continuous Optimization. EUROPT aims to promote and facilitate communication among researchers in continuous optimization.
The EUROPT workshop in Edinburgh was attended by 144 participants from 27 countries. The scientific program consisted of 34 contributed sessions, comprising 110 talks, and four plenary talks by Serge Gratton, Sven Leyffer, Lieven Vandenberghe, and the EUROPT Fellow of the year 2015, Panos Pardalos.
The focus of the present special issue on advances in continuous optimization on the occasion of EUROPT 2015 is on continuous optimization theory, algorithms, software, and applications, with special emphasis on large-scale optimization and linear algebra techniques in optimization. A brief overview of this special issue is as follows.
D. Aussel and S. Sagratella introduce a concept that allows any solution of a quasivariational inequality to be computed via a variational inequality. They apply this technique to show how variational inequalities can be employed to determine the whole set of solutions of generalized Nash equilibrium problems.
J. Li, M. S. Andersen, and L. Vandenberghe study local and global convergence of the proximal Newton method with inexact search directions. They illustrate the benefits of their approach by an application to \(\ell _1\)-regularized covariance selection under prior constraints on the sparsity pattern of the inverse covariance matrix.
P. Armand and I. Lankoandé present a regularization algorithm for solving smooth unconstrained minimization problems when the Hessian may be singular at a locally optimal point. Under a local error bound condition they prove superlinear convergence of their method.
L. F. Berti, A. R. L. Oliveira, and C. T. L. S. Ghidini propose a variation of the predictor-corrector interior point method for linear programming. They use continued iteration to compute a new search direction, thereby reducing the overall computational cost of solving a linear optimization problem.
R. Cambini, L. Carosi, L. Martein, and E. Valipour describe a new method for solving generalized fractional optimization problems, where a ratio of powers of affine functions is to be minimized over a polyhedron. They show that the particular structure of the objective function makes it possible to devise a simplex-like algorithm, even when the objective function is not pseudoconcave.
D. Silva, M. Velazco, and A. Oliveira analyze the influence of sparse matrix reordering on the solution of linear systems arising from interior point methods for linear programming. Numerically they observe that reorderings by three particular heuristics can be advantageous in reducing the computational cost.
J. L. Redondo, J. Fernández, and P. M. Ortigosa propose an evolutionary algorithm for treating nonlinear multi-objective optimization problems by quickly obtaining a fixed-size approximation of the Pareto front. In accordance with theoretical considerations, the performed computational tests show that the new algorithm outperforms current state-of-the-art algorithms.
Finally, P. Mahey, J. Koko, and A. Lenoir consider long-term energy pricing in a production network with zones of production and transfer links. The coordination of production of all zones is achieved by two reformulations of the dynamic model which lead to different decomposition strategies. Numerical experiments on real-size dynamic models indicate that, among these, proximal decomposition is preferable.
As Editor-in-Chief of MMOR, I would like to express my deepest gratitude to the authors for their high-quality contributions, and to all the referees for their careful reading and substantial critical remarks, which have made this special issue an outstanding collection of papers on current trends in continuous optimization.