Abstract.
We discuss convex optimization problems in which some of the variables are constrained to be finite autocorrelation sequences. Problems of this form arise in signal processing and communications, and we describe applications in filter design and system identification. Autocorrelation constraints in optimization problems are often approximated by sampling the corresponding power spectral density, which results in a set of linear inequalities. They can also be cast as linear matrix inequalities via the Kalman-Yakubovich-Popov lemma. The linear matrix inequality formulation is exact, and results in convex optimization problems that can be solved using interior-point methods for semidefinite programming. However, it has an important drawback: to represent an autocorrelation sequence of length $n$, it requires the introduction of a large number ($n(n+1)/2$) of auxiliary variables. This results in a high computational cost when general-purpose semidefinite programming solvers are used. We present a more efficient implementation based on duality and on interior-point methods for convex problems with generalized linear inequalities.
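The sampling approximation mentioned above can be made concrete. A sequence $r = (r_0, \dots, r_{n-1})$ is a finite autocorrelation sequence if and only if its power spectral density $R(\omega) = r_0 + 2\sum_{k=1}^{n-1} r_k \cos k\omega$ is nonnegative for all $\omega$; sampling $R$ at a grid of frequencies replaces this semi-infinite constraint with finitely many linear inequalities in $r$. The sketch below (illustrative only; the grid size and notation are our assumptions, not taken from the paper) evaluates these sampled inequalities:

```python
import numpy as np

def psd_samples(r, N=200):
    """Evaluate R(w) = r[0] + 2*sum_{k>=1} r[k]*cos(k*w) at N points on [0, pi].

    Each sampled value is a linear function of r, so requiring
    psd_samples(r) >= 0 gives N linear inequality constraints that
    approximate the exact autocorrelation constraint R(w) >= 0 for all w.
    """
    w = np.linspace(0.0, np.pi, N)
    k = np.arange(1, len(r))
    # cos(k * w_j) forms the coefficient matrix of the linear inequalities
    return r[0] + 2.0 * np.cos(np.outer(w, k)) @ r[1:]

# Example: build r from a filter h, so r is a valid autocorrelation
# (r[k] = sum_i h[i] h[i+k]) and its spectrum |H(e^{jw})|^2 is nonnegative.
h = np.array([1.0, 0.5, -0.25])
r = np.array([h @ h, h[:-1] @ h[1:], h[0] * h[2]])
assert psd_samples(r).min() >= -1e-12
```

The approximation can violate $R(\omega) \ge 0$ between grid points; the linear matrix inequality formulation discussed in the abstract avoids this, at the cost of the $n(n+1)/2$ auxiliary variables.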
Additional information
Received: August 20, 2001 / Accepted: July 16, 2002 Published online: September 27, 2002
★ This material is based upon work supported by the National Science Foundation under Grant No. ECS-9733450.
Alkire, B., Vandenberghe, L. Convex optimization problems involving finite autocorrelation sequences. Math. Program., Ser. A 93, 331–359 (2002). https://doi.org/10.1007/s10107-002-0334-x