# Self-regular functions and new search directions for linear and semidefinite optimization

## Abstract.

In this paper, we introduce the notion of a *self-regular* function. Such a function is strongly convex and smooth coercive on its domain, the positive real axis. We show that any such function induces a so-called self-regular proximity function and a corresponding search direction for primal-dual path-following interior-point methods (IPMs) for solving linear optimization (LO) problems. It is proved that the new large-update IPMs enjoy a polynomial \(\mathcal{O}\big(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon}\big)\) iteration bound, where \(q\ge 1\) is the so-called barrier degree of the kernel function underlying the algorithm. The constant hidden in the \(\mathcal{O}\)-symbol depends on \(q\) and the growth degree \(p\ge 1\) of the kernel function. When the kernel function is chosen appropriately, the new large-update IPMs have a polynomial \(\mathcal{O}\big(\sqrt{n}\,\log n\,\log\frac{n}{\varepsilon}\big)\) iteration bound, thus improving the currently best known bound for large-update methods by almost a factor \(\sqrt{n}\). Our unified analysis also yields the best known \(\mathcal{O}\big(\sqrt{n}\,\log\frac{n}{\varepsilon}\big)\) iteration bound for small-update IPMs. At each iteration, we need to solve only one linear system. An extension of the above results to semidefinite optimization (SDO) is also presented.
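To make the abstract's central objects concrete, the following sketch evaluates the classical logarithmic-barrier kernel \(\psi(t)=\frac{t^2-1}{2}-\ln t\), a standard example of a self-regular function with growth degree \(p=1\) and barrier degree \(q=1\), and the proximity measure \(\Psi(v)=\sum_i \psi(v_i)\) it induces. The function names and the numerical check are illustrative assumptions, not the paper's notation; the paper treats a whole parametric family of such kernels.

```python
import math

def psi(t: float) -> float:
    """Classical kernel psi(t) = (t^2 - 1)/2 - ln t, self-regular
    with p = q = 1 (an illustrative member of the family of
    self-regular kernel functions)."""
    return (t * t - 1.0) / 2.0 - math.log(t)

def psi2(t: float) -> float:
    """Second derivative psi''(t) = 1 + 1/t^2 (strict convexity on t > 0)."""
    return 1.0 + 1.0 / (t * t)

def proximity(v: list[float]) -> float:
    """Self-regular proximity Psi(v) = sum_i psi(v_i) for a scaled
    primal-dual point v > 0; Psi(v) = 0 exactly when v is the
    all-ones vector, i.e. the iterate lies on the central path."""
    return sum(psi(vi) for vi in v)

# Self-regularity growth condition for p = q = 1 asks that psi''(t) be
# sandwiched between constant multiples of t^(p-1) + t^(-1-q) = 1 + 1/t^2;
# for this kernel the sandwich holds with both constants equal to 1.
for t in (0.5, 1.0, 2.0, 5.0):
    assert abs(psi2(t) - (t ** 0 + t ** -2)) < 1e-12

print(proximity([1.0, 1.0, 1.0]))  # 0.0: the point is on the central path
print(proximity([0.5, 2.0]) > 0)  # True: off-center points have positive proximity
```

The search direction of the corresponding IPM is then driven by the gradient of \(\Psi\), and the barrier degree \(q\) of the chosen kernel is what enters the \(\mathcal{O}\big(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon}\big)\) iteration bound.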
