Abstract
We propose a complete-search algorithm for solving a class of nonconvex, possibly infinite-dimensional, optimization problems to global optimality. We assume that the optimization variables are in a bounded subset of a Hilbert space, and we determine worst-case runtime bounds for the algorithm under certain regularity conditions of the cost functional and the constraint set. Because these runtime bounds are independent of the number of optimization variables and, in particular, are valid for optimization problems with infinitely many optimization variables, we prove that the algorithm converges to an \(\varepsilon \)-suboptimal global solution within finite runtime for any given termination tolerance \(\varepsilon > 0\). Finally, we illustrate these results for a problem of calculus of variations.
Introduction
Infinite-dimensional optimization problems arise in many research fields, including optimal control [7, 8, 24, 54], optimization with partial differential equations (PDE) embedded [22], and shape/topology optimization [5]. In practice, these problems are often solved approximately by applying discretization techniques; the original infinite-dimensional problem is replaced by a finite-dimensional approximation that can then be tackled using standard optimization techniques. However, the resulting discretized optimization problems may comprise a large number of optimization variables, which grows unbounded as the accuracy of the approximation is refined. Unfortunately, worst-case runtime bounds for complete-search algorithms in nonlinear programming (NLP) scale poorly with the number of optimization variables. For instance, the worst-case runtime of spatial branch-and-bound [17, 44] scales exponentially with the number of optimization variables. In contrast, algorithms for solving convex optimization problems in polynomial runtime are known [11, 40], e.g. in linear programming (LP) or convex quadratic programming (QP). While these efficient algorithms enable the solution of very large-scale convex optimization problems, such as structured or sparse problems, in general their worst-case runtime bounds also grow unbounded as the number of decision variables tends to infinity.
Existing theory and algorithms that directly analyze and exploit the infinite-dimensional nature of an optimization problem are mainly found in the field of convex optimization. For the most part, these algorithms rely on duality in convex optimization in order to construct upper and lower bounds on the optimal solution value, although establishing strong duality in infinite-dimensional problems can prove difficult. In this context, infinite-dimensional linear programming problems have been analyzed thoroughly [3]. A variety of algorithms are also available for dealing with convex infinite-dimensional optimization problems, some of which have been analyzed in generic Banach spaces [14], as well as certain tailored algorithms for continuous linear programming [4, 13, 32].
In the field of nonconvex optimization, problems with an infinite number of variables are typically studied in a local neighborhood of a stationary point. For instance, local optimality in continuous-time optimal control problems can be analyzed by using Pontryagin’s maximum principle [46], and a number of local optimal control algorithms are based on this analysis [6, 12, 51, 54]. More generally, approaches in the classical field of variational analysis [37] rely on local analysis concepts, from which information about global extrema may not be derived in general. In fact, nonconvex infinite-dimensional optimization remains an open field of research and, to the best of our knowledge, there are currently no generic complete-search algorithms for solving such problems to global optimality.
This paper asks the question whether a global optimization algorithm can be constructed whose worst-case runtime complexity is independent of the number of optimization variables, such that this algorithm would remain tractable for infinite-dimensional optimization problems. Clearly, devising such an algorithm may only be possible for a certain class of optimization problems. Interestingly, the fact that the “complexity” or “hardness” of an optimization problem does not necessarily depend on the number of optimization variables has been observed—and is in fact exploited—in state-of-the-art global optimization solvers for NLP/MINLP, although these observations are still to be analyzed in full detail. For instance, instead of applying a branch-and-bound algorithm in the original space of optimization variables, global NLP/MINLP solvers such as BARON [49, 52] or ANTIGONE [34] proceed by lifting the problem to a higher-dimensional space via the introduction of auxiliary variables from the DAG decomposition of the objective and constraint functions. In a different context, the solution of a lifted problem in a higher-dimensional space has become popular in numerical optimal control, where the so-called multiple-shooting methods often outperform their single-shooting counterparts despite the fact that the former call for the solution of a larger-scale (discretized) NLP problem [7, 8]. This idea that certain optimization problems become easier to solve than equivalent problems in fewer variables is also central to the work on lifted Newton methods [2]. To the best of our knowledge, such behavior cannot currently be explained with results from the field of complexity analysis, which typically give monotonically increasing worst-case runtime bounds as the number of optimization variables increases. In this respect, these runtime bounds predict the opposite of the behavior that can sometimes be observed in practice.
Problem formulation
The focus of this paper is on complete-search algorithms for solving nonconvex optimization problems of the form:
where \(F: H \rightarrow \mathbb R\) and \(C \subseteq H\) denote the cost functional and the constraint set, respectively; and the domain H of this problem is a (possibly infinite-dimensional) Hilbert space with respect to the inner product \(\langle \cdot ,\cdot \rangle : H \times H \rightarrow \mathbb R\). The theoretical considerations in the paper do not assume a separable Hilbert space, although our various illustrative examples are based on separable spaces.
Definition 1
A feasible point \(x^* \in C\) is said to be an \(\varepsilon \)-suboptimal global solution—or \(\varepsilon \)-global optimum—of (1), with \(\varepsilon > 0\), if
$$\begin{aligned} F(x^*) \; \le \; \inf _{x \in C} \, F(x) + \varepsilon \, . \end{aligned}$$
We make the following assumptions regarding the geometry of C throughout the paper.
Assumption 1
The constraint set C is convex, has a nonempty relative interior, and is bounded with respect to the induced norm on H; that is, there exists a constant \(\gamma <\infty \) such that
$$\begin{aligned} \forall x \in C, \quad \Vert x \Vert _H \; \le \; \gamma \, . \end{aligned}$$
Our main objective in this paper is to develop an algorithm that can locate an \(\varepsilon \)-suboptimal global optimum of Problem (1) in finite runtime for any given accuracy \(\varepsilon >0\), provided that F satisfies certain regularity conditions alongside Assumption 1.
Remark 1
Certain infinite-dimensional optimization problems are formulated in a Banach space \((B,\Vert \cdot \Vert )\) rather than a Hilbert space, for instance in the field of optimal control of partial differential equations in order to analyze the existence of extrema [22]. The optimization problem (1) becomes
with \(\hat{F}: B \rightarrow \mathbb R\) and \(\hat{C}\) a convex bounded subset of B. Provided that:

1.
the Hilbert space \(H\subseteq B\) is convex and dense in \((B,\Vert \cdot \Vert )\);

2.
the function \(\hat{F}\) is upper semicontinuous in \(\hat{C}\); and

3.
the constraint set \(\hat{C}\) has a nonempty relative interior;
we may nonetheless consider Problem (1) with \(C := \hat{C} \cap H\) instead of (2), since any \(\varepsilon \)-suboptimal global solution of the former is also an \(\varepsilon \)-suboptimal global solution of (2), and both problems have such \(\varepsilon \)-suboptimal points. Because Conditions 1–3 are often satisfied in practical applications, it is not restrictive for the purpose of this paper to assume that the domain of the optimization variables is indeed a Hilbert space.
Outline and contributions
The paper starts by discussing several regularity conditions for sets and functionals defined in a Hilbert space in Sect. 2, based on which complete-search algorithms can be constructed whose runtime is independent of the number of optimization variables. Such an algorithm is presented in Sect. 3 and analyzed in Sect. 4, which constitute the main contributions and novelty of the paper. A numerical case study is presented in Sect. 5 in order to illustrate the main results, before concluding the paper in Sect. 6.
Although some of these algorithmic ideas are inspired by a recent paper on global optimal control [25], we develop herein a much more general framework for optimization in Hilbert space. Besides, Sect. 4 derives novel worst-case complexity estimates for the proposed algorithm. We argue that these ideas could help lay the foundations for new ways of analyzing the complexity of certain optimization problems based on their structural properties rather than their number of optimization variables. Although the runtime estimates for the proposed algorithm remain conservative, they indicate that complexity in numerical optimization does not necessarily depend on whether the problem at hand is small-scale, large-scale, or even infinite-dimensional.
Some regularity conditions for sets and functionals in Hilbert space
This section builds upon basic concepts in infinite-dimensional Hilbert spaces in order to arrive at certain regularity conditions for sets and functionals defined in such spaces. Our focus on Hilbert spaces is motivated by the ability to construct an orthogonal basis \(\varPhi _0, \varPhi _1, \ldots \in H\) such that
$$\begin{aligned} \forall j,k \in \mathbb N, \quad \langle \varPhi _j , \varPhi _k \rangle \; = \; \sigma _k \, \delta _{j,k} \end{aligned}$$
for some scalars \(\sigma _0, \sigma _1, \ldots \in \mathbb R^{++}\), where \(\delta _{j,k}\) denotes the Kronecker delta. We make the following assumption throughout the paper:
Assumption 2
The basis functions \(\varPhi _k\) are uniformly bounded with respect to \(\Vert \cdot \Vert _H\).
Equipped with such a basis, we can define the associated projection functions \(P_M: H \rightarrow H\) for each \(M \in \mathbb N\) as
$$\begin{aligned} P_M(x) \; := \; \sum _{k=0}^{M} \frac{\langle x , \varPhi _k \rangle }{\sigma _k} \, \varPhi _k \, . \end{aligned}$$
A natural question to ask at this point is what can be said about the distance between an element \(x \in H\) and its projection \(P_M(x)\) for a given \(M\in \mathbb N\).
Definition 2
We call \(D(M,x) \; := \; \Vert \, x - P_M(x) \, \Vert _H\) the distance between an element \(x \in H\) and its projection \(P_M(x)\). Moreover, given the constraint set \(C\subseteq H\), we define the worst-case projection distance
$$\begin{aligned} \overline{D}_C(M) \; := \; \sup _{x \in C} \, D(M,x) \, . \end{aligned}$$
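As a concrete illustration of these definitions, the following minimal Python sketch evaluates the projection coefficients and the distance D(M, x) numerically, assuming \(H = L_2[0,1]\) equipped with the shifted Legendre basis and weights \(\sigma _k = \frac{1}{2k+1}\) used in Example 2 below; all helper names are ours.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

# Shifted Legendre basis on [0,1]: <phi_j, phi_k> = delta_jk / (2k+1),
# i.e. sigma_k = 1/(2k+1), matching the weights used in Example 2.
phi = lambda k: (lambda t: Legendre.basis(k)(2.0 * t - 1.0))
sigma = lambda k: 1.0 / (2 * k + 1)

def proj_coeffs(x, M):
    # coefficients a_k = <x, phi_k> / sigma_k of the projection P_M(x)
    return [quad(lambda t: x(t) * phi(k)(t), 0.0, 1.0, limit=200)[0] / sigma(k)
            for k in range(M + 1)]

def distance(x, M):
    # D(M, x) = || x - P_M(x) ||_H in L2[0,1]
    a = proj_coeffs(x, M)
    r = lambda t: x(t) - sum(a[k] * phi(k)(t) for k in range(M + 1))
    return np.sqrt(quad(lambda t: r(t) ** 2, 0.0, 1.0, limit=200)[0])

smooth = lambda t: np.exp(-t) * np.sin(3.0 * t)
print([round(distance(smooth, M), 8) for M in (1, 2, 4, 8)])
```

For a smooth element such as the function above, D(M, x) decays very rapidly with M; the following results quantify how this behavior depends on the regularity of x and on the constraint set C.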
Lemma 1
Under Assumption 1, the function \(\overline{D}_C:\mathbb N \rightarrow \mathbb R\) is uniformly bounded from above by \(\gamma \).
Proof
For each \(M\in \mathbb N\) and any \(x \in C\), orthogonality of the projection gives
$$\begin{aligned} D(M,x)^2 \; = \; \Vert x - P_M(x) \Vert _H^2 \; = \; \Vert x \Vert _H^2 - \Vert P_M(x) \Vert _H^2 \; \le \; \Vert x \Vert _H^2 \, . \end{aligned}$$
The result follows by Assumption 1. \(\square \)
Despite being uniformly bounded, the function \(\overline{D}_C(M)\) may not converge to zero as \(M \rightarrow \infty \) in an infinite-dimensional Hilbert space in general. Such lack of convergence is illustrated in the following example.
Example 1
Consider the case that all the basis functions \(\varPhi _0,\varPhi _1,\ldots \) are in the constraint set C, and define the sequence \(\{x_k\}_{k\in \mathbb N}\) with \(x_k:=\varPhi _{k+1}\). For all \(k\in \mathbb N\), we have \(P_k(x_k)=0\), and therefore
$$\begin{aligned} \overline{D}_C(k) \; \ge \; D(k,x_k) \; = \; \Vert \varPhi _{k+1} \Vert _H \, . \end{aligned}$$
\(\square \)
This behavior is unfortunate because the existence of minimizers to Problem (1) cannot be ascertained without making further regularity assumptions. Moreover, for a sequence \((x_k)_{k \in \mathbb N}\) of feasible points of Problem (1) converging to an infimum, it could be that
That is, any attempt to approximate the infimum by constructing a sequence of finite parameterizations of the optimization variable x could in principle be unsuccessful.
A principal aim of the following sections is to develop an optimization algorithm, whose convergence to an \(\varepsilon \)global optimum of Problem (1) can be certified. But instead of making assumptions about the existence, or even the regularity, of the minimizers of Problem (1), we shall impose suitable regularity conditions on the objective function F in (1). In preparation for this analysis, we start by formalizing a particular notion of regularity for the elements of H.
Definition 3
An element \(g \in H\) is said to be regular for the constraint set C if
Moreover, we call the function \(R_C(\cdot ,g): \mathbb N \rightarrow \mathbb R^+\) the convergence rate at g on C.
Theorem 1
For any \(g \in H\), we have
In the particular case of g being a regular element for C, we have
Proof
Let \(M\in \mathbb N\), and consider the optimization problem
where we have introduced the variable \(w := x - P_M(x)\) such that
Since the functions \(\varPhi _0, \ldots , \varPhi _M\) are orthogonal to each other, we have \(\langle \varPhi _k , w \rangle = 0\) for all \(k \in \{ 0, \ldots , M\}\), and it follows that
Next, we use duality to obtain
where \(\lambda \in \mathbb R^{M+1}\) are multipliers associated with the constraints \(\langle \varPhi _k , w \rangle = 0\) for \(k \in \{ 0, \ldots , M \}\). Applying the Cauchy–Schwarz inequality gives
and with the particular choice \(\lambda _k^* := \frac{\langle g , \varPhi _k \rangle }{\sigma _k}\) for each \(k\in \{0,\ldots ,M\}\), we have
The optimal value of the minimization problem
can be estimated analogously, giving \(\underline{V}_M\ge -R_C(M,g)\), and the result follows. \(\square \)
The following example establishes the regularity of piecewise smooth functions with a finite number of singularities in the Hilbert space of square-integrable functions with the Legendre polynomials as orthogonal basis functions.
Example 2
We consider the Hilbert space \(H = L_2[0,1]\) of standard square-integrable functions on the interval [0, 1] equipped with the standard inner product, \(\langle f,g\rangle :=\int _0^1 f(s)g(s)\mathrm{d}s\), and we choose the Legendre polynomials on the interval [0, 1] with weighting factors \(\sigma _k = \frac{1}{2k+1}\) as orthogonal basis functions \((\varPhi _k)_{k \in \mathbb N}\). Our focus is on piecewise smooth functions \(g: [0,1] \rightarrow \mathbb R\) with a given finite number of singularities, for which we want to establish regularity in the sense of Definition 3 for a bounded constraint set \(C\subset L_2[0,1]\).
There are numerous results on approximating functions using polynomials, including convergence rate estimates [15]. One such result in [48] shows that any piecewise smooth function \(f: [0,1] \rightarrow \mathbb R\) can be approximated with a polynomial \(p_f^M: [0,1] \rightarrow \mathbb R\) of degree M such that
for any given \(\alpha ,\beta > 0\) with either \(\alpha < 1\) and \(\beta \ge \alpha \), or \(\alpha = 1\) and \(\beta > 1\); some constants \(K_1,K_2 > 0\); and where d(y) denotes the distance to the nearest singularity. In particular, the following convergence rate estimate can be derived using this result in the present example, for any piecewise smooth functions \(g: [0,1] \rightarrow \mathbb R\) with a finite number of singularities:
for some constant \(K < \infty \). In order to establish the very last part of the above inequality, it is enough to consider a function g with a single singularity, e.g., at the midpoint \(y = \frac{1}{2}\), and to use \(\alpha = \beta = \frac{1}{2}\) (see Footnote 1):
Convergence rate estimates for k-times differentiable and piecewise smooth functions can be obtained in a similar way, using for instance the results in [15, 48]. \(\square \)
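The following self-contained sketch checks the \(\mathbf {O}(M^{-1/2})\) estimate empirically for a unit-step function, computing the squared projection error via Parseval's identity rather than by quadrature of the residual; the basis, weights, and helper names are the same assumptions as in the sketch above.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

phi = lambda k: (lambda t: Legendre.basis(k)(2.0 * t - 1.0))  # shifted Legendre
sigma = lambda k: 1.0 / (2 * k + 1)

step = lambda t: 1.0 if t >= 0.5 else 0.0     # single singularity at t = 1/2

def l2_error(M):
    # Parseval: ||x - P_M(x)||^2 = ||x||^2 - sum_{k<=M} sigma_k a_k^2
    a = [quad(lambda t: step(t) * phi(k)(t), 0.0, 1.0,
              points=[0.5], limit=200)[0] / sigma(k) for k in range(M + 1)]
    err2 = 0.5 - sum(sigma(k) * a[k] ** 2 for k in range(M + 1))
    return np.sqrt(max(err2, 0.0))

for M in (4, 16, 64):
    print(M, l2_error(M), l2_error(M) * np.sqrt(M))  # last column ~ constant
```

The near-constant last column is consistent with an error decay proportional to \(M^{-1/2}\).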
A useful generalization of Definition 3 and a corollary of Theorem 1 are given below.
Definition 4
A set \(G\subseteq H\) is said to be regular for C if
Moreover, we call the function \(\overline{R}_C(\cdot ,G):\mathbb N\rightarrow \mathbb R^+\) the worstcase convergence rate for G on C.
Corollary 1
For any regular set \(G \subseteq H\), we have
Remark 2
While any subset of the Euclidean space \(\mathbb R^n\) is trivially regular for a given bounded subset \(C\subset \mathbb R^n\), only certain subsets/subspaces of an infinite-dimensional Hilbert space happen to be regular. Consider for instance the space of square-integrable functions, \(H:= L_2[a,b]\), and let \(G^p\) be any subset of p-times differentiable functions on [a, b], with uniformly Lipschitz-continuous p-th derivatives. It can be shown—e.g., from the analysis in [27] using the standard trigonometric Fourier basis, or from the results in [55] using the Legendre polynomial basis—that
for any bounded constraint set \(C\subset L_2[a,b]\), and \(G^p\) is thereby regular for C. This leads to a rather typical situation, whereby the stronger the regularity assumptions on the function class, the faster the convergence of the associated worst-case convergence rate \(\overline{R}_C(\cdot ,G^p)\)—an improvement of the convergence rate order \(\log (M)\,M^{-p-1}\) as p increases in this instance. In the limit of smooth (\(\mathscr {C}^\infty \)) functions, it can even be established—e.g., using standard results from Fourier analysis [19, 28]—that the convergence rate becomes exponential,
Example 2
(Continued) Consider the following set of unit-step functions
for which we want to establish regularity in the sense of Definition 4. Using earlier results in Example 2, it is known that the function \(x_{0.5}\) can be approximated with a sequence of polynomials \(p_{0.5}^M: [0,1] \rightarrow \mathbb R\) of degree M such that
Likewise, for every \(t \in [0,1]\), we can construct the family of polynomials
Since the latter satisfy the same property as \(x_{0.5}\) that
where the constant \(K < \infty \) is independent of t or M, we have \(\overline{R}_C(M,G_t) \le \mathbf {O}\left( M^{-1/2} \right) \).
This example can be generalized to other classes of functions. For instance, given any smooth function \(f \in L_2[0,1]\), the subset
is regular for C, and also satisfies \(\overline{R}_C(M,G_f) \le \mathbf {O}\left( M^{-1/2}\right) \). This result can be established by writing the elements in \(G_f\) as the product between the piecewise smooth function f and the function \(x_t\), and then approximating the factors separately. \(\square \)
In the remainder of this section, we analyze and illustrate a regularity condition for the cost functional in Problem (1).
Definition 5
The functional \(F: H \rightarrow \mathbb R\) is said to be strongly Lipschitz-continuous on C if there exist a bounded subset \(G \subset H\) that is regular on C and a constant \(L < \infty \) such that
$$\begin{aligned} \forall x \in C, \; \forall e \in H, \quad \left| F(x+e) - F(x) \right| \; \le \; L \, \sup _{g \in G} \, \left| \langle g , e \rangle \right| \, . \end{aligned}$$ (7)
Remark 3
In the special case of an affine functional F, given by
where \(F_0 \in H\), and \(\hat{g}\in H\) is a regular element for C, the condition (7) is trivially satisfied with \(L = 1\) and \(G = \{ \hat{g} \}\). In this interpretation, the regularity condition (7) essentially provides a means of keeping the nonlinear part of F under control.
Remark 4
Consider the finite-dimensional Euclidean space \(\mathbb R^{n}\), a bounded subset \(S\subset \mathbb {R}^n\), and a continuously differentiable function \(F: \mathbb R^n \rightarrow \mathbb R\) whose first derivative takes values in the bounded subset \(G \subset \mathbb R^n\). By the mean-value theorem, F satisfies
$$\begin{aligned} \forall x \in S, \; \forall e \in \mathbb R^n, \quad \left| F(x+e) - F(x) \right| \; = \; \left| \langle \nabla F(\xi ) , e \rangle \right| \; \le \; \sup _{g \in G} \, \left| \langle g , e \rangle \right| \end{aligned}$$
for some point \(\xi \) on the segment between x and \(x+e\).
Thus, any continuously differentiable function with a bounded first derivative is strongly Lipschitz-continuous on any bounded subset of \(\mathbb R^n\). This result can be generalized to certain classes of functionals in infinite-dimensional Hilbert space. For instance, let \(F: H \rightarrow \mathbb R\) be Fréchet differentiable, such that
and let the set of Fréchet derivatives \(G := \{ DF(x)\,\mid \, x\in H \}\subseteq H\) be both bounded and regular on C. Then, F is strongly Lipschitz-continuous on C.
The following two examples investigate strong Lipschitz continuity for certain classes of functionals in the practical space of square-integrable functions with the Legendre polynomials as orthogonal basis functions. The first one (Example 3) illustrates the case of a functional that is not strongly Lipschitz-continuous; the second one (Example 4) identifies a broad class of strongly Lipschitz-continuous functionals defined via the solution of an embedded ODE system. The intention here is to help the reader develop an intuition that strongly Lipschitz-continuous functionals occur naturally in many, although not all, problems of practical relevance.
Example 3
We consider the Hilbert space \(H = L_2[0,1]\) of square-integrable functions on the interval [0, 1] with the standard inner product, and select the orthogonal basis functions \((\varPhi _k)_{k \in \mathbb N}\) as the Legendre polynomials on the interval [0, 1] with weighting factors \(\sigma _k = \frac{1}{2k+1}\). We investigate whether the functional F given below is strongly Lipschitz-continuous on the set \(C := \left\{ x \in L_2[0,1] \,\mid \, \forall s\in [0,1],\; \left| x(s) \right| \le 1 \right\} \),
Consider the family of sets defined by
If the condition (7) were to hold for some bounded and regular set G, we would have by Theorem 1 that
and it would follow from Corollary 1 that
However, this leads to a contradiction since we also have
Therefore, the regularity condition (7) cannot be satisfied with any bounded and regular set G, and F is not strongly Lipschitz-continuous on C. \(\square \)
Remark 5
The result that the functional F in Example 3 is not strongly Lipschitz-continuous on C is not in contradiction with Remark 4. Although F is Fréchet differentiable in \(L_2[0,1]\), the corresponding set G of the Fréchet derivatives of F is indeed unbounded.
Example 4
We again consider the Hilbert space \(H = L_2[0,1]\) of square-integrable functions on the interval [0, 1] equipped with the standard inner product, and select the orthogonal basis functions \((\varPhi _k)_{k \in \mathbb N}\) as the Legendre polynomials on the interval [0, 1] with weighting factors \(\sigma _k = \frac{1}{2k+1}\). Our focus is on the ordinary differential equation (ODE)
where \(B \in \mathbb R^{n\times n}\) is a constant matrix; and \(f: \mathbb R^n \rightarrow \mathbb R^n\), a continuously differentiable and globally Lipschitz-continuous function, so that the solution trajectory \(x(\cdot ,u): [0,1] \rightarrow \mathbb R^n\) is well-defined for all \(u \in L_2[0,1]\). For simplicity, we consider the functional F given by
for some real vector \(c \in \mathbb R^n\). Moreover, the constraint set \(C \subseteq H\) may be any uniformly bounded subset of functions, for instance given by simple uniform bounds of the form
The following developments aim to establish that F is strongly Lipschitz-continuous on C.
By Taylor’s theorem, the defect \(\delta (t,u,e) := x(t,u+e)-x(t,u)\) satisfies the differential equation
with \(\delta (0,u,e)=0\) and \(\varLambda (t,u,e) := \int _0^1 \, \frac{\partial f}{\partial x}( x(t,u) + \eta \delta (t,u,e) ) \, \mathrm {d}\eta \). Since the right-hand-side function f is globally Lipschitz-continuous, we have for any given smooth matrix-valued function \(A: [0,1] \rightarrow \mathbb R^{n \times n}\),
for some constant \(\ell _1 < \infty \). For a particular choice of A, we can decompose \(\delta (t,u,e)\) into the sum \(\delta _\mathrm{l}(t,e) + \delta _\mathrm{n}(t,u,e,\delta _\mathrm{l})\) corresponding to the solution of the ODE system
with \(\delta _\mathrm{l}(0,e) = \delta _\mathrm{n}(0,u,e,\delta _\mathrm{l}) = 0\). In this decomposition, the left-hand side of (7) satisfies
Regarding the linear term \(\delta _\mathrm{l}\) first, we have
with
where \(\varGamma (t,\tau )\) denotes the fundamental solution of the linear ODE (9) such that
Since A is smooth, it follows from Example 2 that the set \(G:= \{ g_s \mid s \in [0,1] \}\) is both regular on C and bounded, and satisfies
Regarding the nonlinear term \(\delta _\mathrm{n}\), since the function \(\varLambda \) is uniformly bounded, applying Gronwall’s lemma to the ODE (10) gives
for some constant \(\ell <\infty \). Finally, combining (11) and (12) shows that F satisfies the condition (7) with \(L := 1 + \ell \exp (\ell )\), and thus F is strongly Lipschitz-continuous on C. \(\square \)
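The conclusion of this example can be probed numerically. The sketch below assumes, for illustration only, a scalar instance (n = 1) of the control-affine dynamics of the form \(\dot{x}(t) = f(x(t)) + b\,u(t)\) with \(x(0)=0\) and the functional \(F(u) = c\,x(1,u)\); the data b, c, f are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

b, c = 1.0, 1.0
f = lambda x: np.tanh(x)                     # smooth, globally Lipschitz

def F(u):
    # evaluate F(u) = c * x(1, u) by integrating the ODE from t = 0 to 1
    sol = solve_ivp(lambda t, x: f(x[0]) + b * u(t), (0.0, 1.0), [0.0],
                    rtol=1e-10, atol=1e-12)
    return c * sol.y[0, -1]

u = lambda t: np.sin(2.0 * np.pi * t)
for freq in (1, 8, 64):                      # increasingly oscillatory e
    e = lambda t, w=freq: 0.2 * np.cos(2.0 * np.pi * w * t)
    print(freq, abs(F(lambda t: u(t) + e(t)) - F(u)))
```

The printed differences shrink as the perturbation e becomes more oscillatory, in agreement with a bound of the form \(L \sup _{g \in G} |\langle g, e \rangle |\) for a set G of smooth functions, for which \(\langle g, e\rangle \) vanishes in the high-frequency limit.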
Remark 6
The functional F in the previous example is defined implicitly via the solution of an ODE. The result that such functionals are strongly Lipschitz-continuous is particularly significant insofar as the proposed optimization framework will encompass a broad class of optimal control problems as well as problems in the calculus of variations. In fact, it turns out that strong Lipschitz-continuity still holds when replacing the constant matrix B in (8) with any matrix-valued, continuously differentiable and globally Lipschitz-continuous function of x(t, u), thus encompassing quite a general class of nonlinear control-affine systems. In the case of general nonlinear ODEs, however, strong Lipschitz-continuity may be lost. It could nevertheless be recovered by restricting condition (7) in Definition 5 as
with the projection error set \(E_C \; := \; \left\{ P_M(x) - x \,\mid \, x \in C, \, M \in \mathbb N \, \right\} \subset H\), and also restricting the constraint set C to only contain uniformly bounded and Lipschitz-continuous functions in \(L_2[0,1]\) with uniformly bounded Lipschitz constants.
We close this section with a brief analysis of the relationship between strong and classical Lipschitz-continuity in infinite-dimensional Hilbert space.
Lemma 2
Every strongly Lipschitz-continuous functional \(F:H\rightarrow \mathbb R\) on C is also Lipschitz-continuous on C.
Proof
Let G be a bounded and regular subset of H on C such that the condition (7) is satisfied. Since G is bounded, there exists a constant \(\alpha <\infty \) such that \(\sup _{g \in G} \Vert g \Vert _H \le \alpha \). Applying the Cauchy–Schwarz inequality to the right-hand side of (7) gives
$$\begin{aligned} \forall x \in C, \; \forall e \in H, \quad \left| F(x+e) - F(x) \right| \; \le \; L \, \sup _{g \in G} \, \Vert g \Vert _H \, \Vert e \Vert _H \; \le \; L \, \alpha \, \Vert e \Vert _H \, , \end{aligned}$$
and so F is Lipschitz-continuous on C. \(\square \)
Remark 7
With regularity of the set G alone, i.e. without boundedness, the condition (7) may not imply Lipschitz-continuity, or even continuity, of F. As a counterexample, let \(G := \mathrm {span} \left( \varPhi _0, \varPhi _1, \ldots , \varPhi _N \right) \) be the subspace spanned by the first \(N+1\) basis functions in the infinite-dimensional Hilbert space H. It is clear that G is regular on any bounded set \(C\subset H\) since \(\overline{R}_C(M,G) = 0\) for all \(M \ge N\). Now, consider the functional \(F:H\rightarrow \mathbb R\) given by
for some \(\hat{g} \in G\). For every \((x,e) \in C\times H\), we have
Therefore, the condition (7) is indeed satisfied even though F is discontinuous.
Remark 8
In general, Lipschitz-continuity does not imply strong Lipschitz-continuity in an infinite-dimensional Hilbert space. A counterexample is easily contrived for the functional \(F:L_2[0,1]\rightarrow \mathbb R\) given by
Although this functional is Lipschitz-continuous, it can be shown by a similar argument as in Example 3 that it is not strongly Lipschitz-continuous.
Global optimization in Hilbert space using complete search
The application of complete-search strategies to infinite-dimensional optimization problems such as (1) calls for an extension of the (spatial) branch-and-bound principle [23] to general Hilbert space. The approach presented in this section differs from branch-and-bound in that the dimension M of the search space is adjusted, as necessary, during the iterations of the algorithm, by using a so-called lifting operation—hence the name branch-and-lift algorithm. The basic idea is to bracket the optimal solution value of Problem (1) and progressively refine these bounds via this lifting mechanism, combined with traditional branching and fathoming.
Based on the developments in Sect. 2, the following subsections describe methods for exhaustive partitioning in infinite-dimensional Hilbert space (Sect. 3.1) and for computing rigorous upper and lower bounds on given subsets of the variable domain (Sect. 3.2), before presenting the proposed branch-and-lift algorithm (Sect. 3.3).
Partitioning in infinite-dimensional Hilbert space
Similar to branch-and-bound search, the proposed branch-and-lift algorithm maintains a partition \(\mathscr {A} := \{A_1,\ldots ,A_k\}\) of finite-dimensional sets \(A_1,\ldots ,A_k\). This partition is updated through the repeated application of certain operations, including branching and lifting, in order to close the gap between an upper and a lower bound on the global solution value of the optimization problem (1). The following definition is useful in order to formalize these operations:
Definition 6
With each pair \((M,A) \in \mathbb N \times \mathscr {P}( \mathbb R^{M+1})\), we associate a subregion \(X_M(A)\) of H given by
Moreover, we say that the set A is infeasible if \(X_M(A) = \varnothing \).
Notice that each subregion \(X_M(A)\) is a convex set if the sets C and A are themselves convex. For practical reasons, we restrict ourselves to compact subsets \(A \in \mathbb S^{M+1} \subseteq \mathscr {P}( \mathbb R^{M+1})\) herein, where the class of sets \(\mathbb S^{M+1}\) is easily stored and manipulated by a computer. For example, \(\mathbb S^{M+1}\) could be a set of interval boxes, polytopes, ellipsoids, etc.
The ability to detect infeasibility of a set \(A \in \mathbb S^{M+1}\) is pivotal for complete search. Under the assumption that the constraint set C is convex (Assumption 1), a certificate of infeasibility can be obtained by considering the convex optimization problem
It readily follows from the Cauchy–Schwarz inequality that
$$\begin{aligned} \left| \langle x , \varPhi _k \rangle - \langle y , \varPhi _k \rangle \right| \; \le \; \Vert x - y \Vert _H \end{aligned}$$
for any (normalized) basis function \(\varPhi _k\), and so \(\Vert x-y \Vert _H = 0\) implies \(\langle x, \varPhi _k \rangle = \langle y, \varPhi _k \rangle \). Consequently, a set A is infeasible if and only if \(d_C(A) > 0\). Because Slater’s constraint qualification holds for Problem (13) under Assumption 1, one approach to checking infeasibility to within high numerical accuracy relies on duality for computing lower bounds on the optimal solution value \(d_C(A)\)—similar in essence to the infinite-dimensional convex optimization techniques in [4, 14]. For the purpose of this paper, our focus is on a general class of nonconvex objective functionals F, whereas the constraint set C is assumed to be convex and to have a simple geometry in order to avoid numerical issues in solving feasibility problems of the form (13). We shall therefore assume, from this point onwards, that infeasibility can be verified with high numerical accuracy for any set \(A \in \mathbb S^{M+1}\).
A branching operation subdivides any set \(A \in \mathbb S^{M+1}\) in the partition \(\mathscr {A}\) into two compact subsets \(A_\mathrm{l},A_\mathrm{r} \in \mathbb S^{M+1}\) such that \( A_\mathrm{l}\cup A_\mathrm{r} \supseteq A\), thereby updating the partition as
On the other hand, a lifting operation essentially lifts any set \(A \in \mathbb S^{M+1}\) into a higher-dimensional space under the function \(\varGamma _M: \mathbb S^{M+1} \rightarrow \mathbb S^{M+2}\), defined such that
The question of how to define the higher-order coefficient \(\langle x, \varPhi _{M+1} \rangle \) in such a lifting is related to the so-called moment problem, which asks under which conditions on a sequence \((a_k)_{k \in \{ 1,\ldots ,N\}}\), called a moment sequence, there exists an associated element \(x \in H\) with \(a_k = \frac{\langle x, \varPhi _k \rangle }{\sigma _k} \) for each \(k \in \{ 1, \ldots , N \}\). Classical examples of such moment problems are Stieltjes’, Hamburger’s, and Legendre’s moment problems [1]. Here, we adopt the modern standpoint on moment problems using convex optimization [30, 42], by considering the following optimization subproblems:
Although both optimization problems in (14) are convex when A and C are convex, they remain infinite-dimensional, and thus intractable in general. Obtaining lower and upper bounds \(\underline{a}_{M+1}(A)\), \(\overline{a}_{M+1}(A)\) is nonetheless straightforward under Assumption 1. In case no better approach is available, one can always use
which follows readily from the Cauchy–Schwarz inequality and the property that \(\Vert \varPhi _{M+1} \Vert _H = 1\). As already mentioned in the introduction of the paper, a variety of algorithms are now available for tackling convex infinite-dimensional problems both efficiently and reliably [4, 14], which could provide tighter bounds in practical applications.
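A minimal sketch of the lifting map \(\varGamma _M\) on interval boxes, using this fallback Cauchy–Schwarz bound, may look as follows; the helper names and numerical data are ours, and tighter coefficient intervals could of course be obtained by solving (or bounding) the convex problems (14).

```python
from typing import List, Tuple

Box = List[Tuple[float, float]]   # one coefficient interval per basis direction

def lift(A: Box, sigma_next: float, gamma: float, phi_norm: float = 1.0) -> Box:
    # Gamma_M appends a conservative interval for the next coefficient
    # a_{M+1} = <x, Phi_{M+1}> / sigma_{M+1}: by Cauchy-Schwarz and
    # ||x||_H <= gamma on C, |a_{M+1}| <= gamma * ||Phi_{M+1}||_H / sigma_{M+1}.
    bound = gamma * phi_norm / sigma_next
    return A + [(-bound, bound)]

A0 = [(-2.0, 2.0)]                                  # hypothetical initial box
print(lift(A0, sigma_next=1.0 / 3.0, gamma=2.0))    # assumes sigma_1 = 1/3
```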
A number of remarks are in order:
Remark 9
The idea of introducing a lifting operation to enable partitioning in infinite-dimensional function space was originally introduced by the authors in a recent publication [25], focusing on the global optimization of optimal control problems. One principal contribution of the present paper is a generalization of these ideas to global optimization in any Hilbert space, by identifying a set of sufficient regularity conditions on the cost functional and constraint set for the resulting branch-and-lift algorithms to converge to an \(\varepsilon \)-global solution in finite runtime.
Remark 10
Many recent techniques for global optimization are based on the theory of positive polynomials and their associated linear matrix inequality (LMI) approximations [30, 45], which are also originally inspired by moment problems. Although these LMI techniques may be applied in the practical implementation of the aforementioned lifting operation, they are not directly related to the branch-and-lift algorithm that is developed in the following sections. An important motivation for moving away from the generic LMI framework is that the available implementations scale quite poorly with the number of optimization variables, due to the combinatorial increase of the number of monomials in the associated multivariate polynomial. Therefore, a direct approximation of the cost functional F with multivariate polynomials would conflict with our primary objective of developing a global optimization algorithm whose worst-case runtime does not depend on the number of optimization variables.
Strategies for upper and lower bounding of functionals
Besides partitioning, the efficient construction of tight upper and lower bounds on the global solution value of (1) for given subregions of H is key to a practical implementation of branch-and-lift. Hereafter, functions \(L_M,U_M: \mathbb S^{M+1} \rightarrow \mathbb R\) such that
shall be called lower- and upper-bounding functions of the functional F, respectively. A simple approach to constructing these lower and upper bounds relies on the following two-step decomposition:

1.
Compute bounds \(L^0_M(A)\) and \(U^0_M(A)\) on the finite-dimensional approximation of F as
$$\begin{aligned} \forall A \in \mathbb S^{M+1}, \quad L^0_M(A) \; \le \; \inf _{a \in A} \, F \left( \sum _{i=0}^M a_i \varPhi _i \right) \; \le \; U^0_M(A) \, . \end{aligned}$$ (16)
How to determine such bounds in practice depends on the particular expression of F. In the case that F is factorable, various arithmetics can be used to propagate bounds through a DAG of the function, including interval arithmetic [36], McCormick relaxations [9, 33], and Taylor/Chebyshev model arithmetic [10, 43, 47]. Moreover, if the expression of F embeds a dynamic system described by differential equations, validated bounds can be obtained by using a variety of set-propagation techniques as described, e.g., in [26, 31, 38, 50, 53]; or via hierarchies of LMI relaxations as in [21, 29].

2.
Compute a bound \(\varDelta _M(A)\) on the approximation errors such that
$$\begin{aligned} \forall A \in \mathbb S^{M+1}, \quad&\left| \inf _{x \in X_M(A)} \, F(x) - \inf _{a \in A} \, F \left( \sum _{i=0}^M a_i \varPhi _i \right) \right| \;\le \; \varDelta _M(A) \, . \end{aligned}$$ (17)
In the case that F is strongly Lipschitz-continuous on C, we can always take \(\varDelta _M(A) := L \, \overline{R}_C(M,G)\), where the constant \(L<\infty \) and the bounded regular set G satisfy the condition (7). Naturally, better bounds may be derived by exploiting a particular structure or expression of F.
By construction, the lower-bounding function \(L_M(A):=L^0_M(A)-\varDelta _M(A)\) and the upper-bounding function \(U_M(A):=U^0_M(A)+\varDelta _M(A)\) trivially satisfy (15). Moreover, when the set \(A \in \mathbb S^{M+1}\) is infeasible—see the related discussion in Sect. 3.1—we may set \(\varDelta _M(A) = L_M(A) = U_M(A) = \infty \). A schematic sketch of this two-step scheme follows.
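The sketch below illustrates the decomposition on a toy finite-dimensional part \(F\left( \sum _i a_i \varPhi _i \right) = a_0^2 - a_1\), chosen purely for illustration: interval arithmetic provides \(L^0_M\) and \(U^0_M\) as in (16), and the term \(\varDelta _M(A) = L \, \overline{R}_C(M,G)\) then inflates them as in (17).

```python
from typing import List, Tuple

Interval = Tuple[float, float]

def sq(iv: Interval) -> Interval:
    # interval enclosure of {a^2 : a in iv}
    lo, hi = iv
    lo2 = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return (lo2, max(lo * lo, hi * hi))

def bounds_F0(A: List[Interval]) -> Interval:
    # step 1, cf. (16): enclosure of the toy part a_0^2 - a_1 over the box A
    lo0, hi0 = sq(A[0])
    return (lo0 - A[1][1], hi0 - A[1][0])

def bounds_F(A: List[Interval], L: float, rate: float) -> Interval:
    # step 2, cf. (17): inflate by Delta_M(A) := L * R_C(M, G)
    lo, hi = bounds_F0(A)
    delta = L * rate
    return (lo - delta, hi + delta)          # (L_M(A), U_M(A))

print(bounds_F([(-1.0, 1.0), (0.0, 0.5)], L=2.0, rate=0.1))
```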
We state the following assumptions in anticipation of the convergence analysis in Sect. 4.
Assumption 3
The cost functional F in Problem (1) is strongly Lipschitz-continuous on C, with the condition (7) holding for the constant \(L<\infty \) and the bounded regular subset \(G\subset H\).
Remark 11
Under Assumption 3, Lemma 2 implies that
for a Lipschitz constant \(L' \ge L \, \sup _{g \in G} \Vert g \Vert _H\). Thus, if Assumption 2 is also satisfied, any pair \((M,A) \in \mathbb N \times \mathbb S^{M+1}\) is such that
with \(K := L \, \sup _{k\in \mathbb N} \Vert \varPhi _k \Vert _H\) and \(d_1(A):=\sum _{i=0}^M \sup _{a,a'\in A} \left| a_i-a'_i \right| \). It follows that
and therefore the gap \(U_M(A) - L_M(A)\) can be made arbitrarily small under Assumption 3 by choosing a sufficiently large order M and a sufficiently small diameter for the set A. This result will be exploited systematically in the convergence analysis in Sect. 4.
Remark 12
An alternative upper bound \(U_M(A)\) in (15) may be computed more directly by solving the following nonconvex optimization problem to local optimality,
Without further assumptions on the orthogonal basis functions \(\varPhi _0, \varPhi _1, \ldots \) and on the constraint set C, however, it is not hard to contrive examples where \(P_M(x) \notin C\) for all \(x \in C\) and all \(M \in \mathbb N\); that is, examples where the upper bound (18) does not converge as \(M \rightarrow \infty \). This upper-bounding approach could nonetheless be combined with another bounding approach based on set arithmetic in order to prevent convergence issues; e.g., use the solution value of (18) as long as it provides a bound that is smaller than \(U^0_M(A)+\varDelta _M(A)\).
Branch-and-lift algorithm
The foregoing considerations on partitioning and bounding in Hilbert space can be combined in Algorithm 1 for solving infinite-dimensional optimization problems to \(\varepsilon \)-global optimality.
A number of remarks are in order:

Regarding initialization, the branch-and-lift iteration starts with \(M = 0\). A possible way of initializing the partition \(\mathscr {A} = \left\{ A_0 \right\} \) is by noting that
$$\begin{aligned} \{ \langle x, \varPhi _0 \rangle \mid \, x \in C \, \} \; \subseteq \; \left[ -\frac{\gamma }{\sigma _0}, \frac{\gamma }{\sigma _0}\right] \end{aligned}$$
under Assumption 1.

Besides the branching and lifting operations introduced earlier in Sect. 3.1, fathoming in Step 4 of Algorithm 1 refers to the process of discarding a given set \(A\in \mathscr {A}\) from the partition if
$$\begin{aligned} \quad L_M(A) \, = \, \infty \quad \text {or} \quad \exists \, A' \in \mathscr {A}: \; \; L_M(A) \, > \, U_{M}(A') \, . \end{aligned}$$ 
The main idea behind the lifting condition defined in Step 6 of Algorithm 1, namely
$$\begin{aligned} \forall A\in \mathscr {A}, \quad U_{M}(A)-L_{M}(A) \; \le \; 2(1+\rho ) \varDelta _M(A)\,, \end{aligned}$$ (19)
is that a subset A should be lifted to a higher-dimensional space whenever the approximation error \(\varDelta _M(A)\) due to the finite parameterization becomes of the same order of magnitude as the current optimality gap \(U_{M}(A)-L_{M}(A)\). The aim here is to apply as few lifts as possible, since it is preferable to branch in a lower-dimensional space. The convergence of the branch-and-lift algorithm under this lifting condition is examined in Sect. 4 below, and a schematic sketch of the overall algorithm is given after this list. Notice also that a lifting operation is applied globally—that is, to all parameter subsets in the partition \(\mathscr {A}\)—in Algorithm 1, so all the subsets in \(\mathscr {A}\) share the same parameterization order at any iteration. In a variant of Algorithm 1, one could also imagine a family of subsets having different parameterization orders, by applying the lifting condition locally instead.

Finally, it will be established in the following section that, upon termination and under certain assumptions, Algorithm 1 returns an \(\varepsilon \)suboptimal solution of Problem (1). In particular, Assumption 1 rules out the possibility of an infeasible solution.
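The following Python sketch assembles the partitioning, bounding, fathoming, and lifting operations into the main loop of Algorithm 1 on interval boxes. It is a schematic rendering under the assumptions of this section rather than a reproduction of the authors' implementation; the callables bounds, delta, and lift are placeholders for the constructions described above, and bounds is expected to infer the current order M from the box length.

```python
from typing import Callable, List, Tuple

Box = List[Tuple[float, float]]

def branch_and_lift(bounds: Callable[[Box], Tuple[float, float]],
                    delta: Callable[[int], float],
                    lift: Callable[[Box], Box],
                    A0: Box, eps: float, rho: float = 1.0,
                    max_iter: int = 10 ** 5):
    partition, M = [A0], len(A0) - 1
    for _ in range(max_iter):
        vals = [bounds(A) for A in partition]
        ubd = min(u for _, u in vals)
        # fathoming: discard boxes whose lower bound exceeds the best upper bound
        partition = [A for A, (l, _) in zip(partition, vals) if l <= ubd]
        vals = [bounds(A) for A in partition]
        if all(u - l <= eps for l, u in vals):
            return partition, M                      # eps-convergence reached
        if all(u - l <= 2.0 * (1.0 + rho) * delta(M) for l, u in vals):
            partition, M = [lift(A) for A in partition], M + 1  # lifting, cf. (19)
        else:
            # branch the box with the least lower bound along its widest edge
            i = min(range(len(partition)), key=lambda j: vals[j][0])
            A = partition.pop(i)
            k = max(range(len(A)), key=lambda j: A[j][1] - A[j][0])
            lo, hi = A[k]
            mid = 0.5 * (lo + hi)
            partition += [A[:k] + [(lo, mid)] + A[k + 1:],
                          A[:k] + [(mid, hi)] + A[k + 1:]]
    raise RuntimeError("iteration budget exhausted")
```

Note that, for simplicity, the sketch uses a dimension-dependent error estimate delta(M) in the lifting test, whereas (19) allows a set-dependent estimate \(\varDelta _M(A)\).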
Convergence analysis of branch-and-lift
This section investigates the convergence properties of the branch-and-lift algorithm (Algorithm 1) developed previously. It is convenient to introduce the following notation in order to conduct the analysis:
Definition 7
Let \(G \subseteq H\) be a regular set for C, and define the inverse function \(\overline{R}_C^{-1}(\cdot , G): \mathbb R^{++} \rightarrow \mathbb N\) by
$$\begin{aligned} \overline{R}_C^{-1}(\varepsilon , G) \; := \; \min \left\{ M \in \mathbb N \,\mid \, \overline{R}_C(M,G) \le \varepsilon \right\} \, . \end{aligned}$$
The following result is a direct consequence of the lifting condition (19) in the branch-and-lift algorithm:
Lemma 3
Let Assumption 3 hold, and suppose that finite bounds \(L^0_M(A)\), \(U^0_M(A)\) and \(\varDelta _M(A)\) satisfying (16)–(17) can be computed for any feasible pair \((M,A)\in \mathbb {N}\times \mathbb S^{M+1}\). Then, the number of lifting operations in a run of Algorithm 1 as applied to Problem (1) is at most
$$\begin{aligned} \overline{M} \; := \; \overline{R}_C^{-1}\left( \frac{\varepsilon }{2(\rho +1)L}, G \right) , \end{aligned}$$
regardless of whether or not the algorithm terminates finitely.
Proof
Assume that \(M=\overline{M}\) in Algorithm 1, and that the termination condition is not yet satisfied; that is,
$$\begin{aligned} U_{\overline{M}}(A)-L_{\overline{M}}(A) \; > \; \varepsilon \end{aligned}$$
for a certain feasible set \(A \in \mathscr {A}\). If the lifting condition (19) were to hold for A, then it would follow from (16)–(17) that
Moreover, F being strongly Lipschitz-continuous on C by Assumption 3, we would have
This is a contradiction, since \(\overline{R}_C({\overline{M}}, G) \le \frac{\varepsilon }{2 (\rho +1)L}\) by Definition 7. \(\square \)
Besides having a finite number of lifting operations, the convergence of Algorithm 1 can be established if the elements of a partition can be made arbitrarily small after applying a finite number of subdivisions.
Definition 8
A partitioning scheme is said to be exhaustive if, given any dimension \(M\in \mathbb N\), any tolerance \(\eta >0\), and any bounded initial partition \(\mathscr {A}=\{A_0\}\), we have
$$\begin{aligned} \mathrm {diam}\left( \mathscr {A}\right) \; := \; \max _{A \in \mathscr {A}} \, \mathrm {diam}\left( A\right) \; \le \; \eta \end{aligned}$$
after finitely many subdivisions, where \(\mathrm {diam}\left( A\right) := \sup _{a,a' \in A} \; \Vert a - a' \Vert \). Moreover, we denote by \(\varSigma (\eta ,M)\) an upper bound on the corresponding number of subdivisions in an exhaustive scheme.
The following theorem provides the main convergence result for the proposed branch-and-lift algorithm.
Theorem 2
Let Assumptions 1, 2 and 3 hold, and suppose that finite bounds \(L^0_M(A)\), \(U^0_M(A)\) and \(\varDelta _M(A)\) satisfying (16)–(17) can be computed for any feasible pair \((M,A)\in \mathbb {N}\times \mathbb S^{M+1}\). If the partitioning scheme is exhaustive, then Algorithm 1 terminates after at most \(\overline{\varSigma }\) iterations, where
Proof
By Lemma 3, the maximal number M of lifting operations during a run of Algorithm 1 is finite, such that \(M \le \overline{M}\). Therefore, the lifting condition (19) may not be satisfied for any feasible subset \(A\in \mathscr {A}\), and we have
Since \(L_M(A)=L^0_M(A)-\varDelta _M(A)\) and \(U_M(A)=U^0_M(A)+\varDelta _M(A)\), it follows that the termination condition \(U_{M}(A) - L_{M}(A) \le \varepsilon \) is satisfied if
By Assumptions 2 and 3 and Remark 11, we have
and the termination condition is thus satisfied if
This latter condition is met after at most \(\varSigma \left( \frac{\varepsilon \rho }{K (\rho +1)} , M \right) \) iterations under the assumption that the partitioning scheme is exhaustive. \(\square \)
Remark 13
In the case that the sets \(A \in \mathscr {A}\) are simple interval boxes and the lifting process is implemented per (14), we have
Therefore, one can always subdivide these boxes in such a way that the condition \(\mathrm {diam}\left( \mathscr {A}\right) \le \eta \) is satisfied after at most \(\varSigma (\eta ,M)\) subdivisions, with
for any given dimension M. In particular, \(\varSigma (\eta ,M)\) is monotonically increasing in M, and (20) simplifies to
It should be clear, at this point, that the worst-case estimate \(\overline{\varSigma }\) given in Theorem 2 may be extremely conservative, and the performance of Algorithm 1 could be much better in practice. Nonetheless, a key property of this estimate \(\overline{\varSigma }\) is that it is independent of the actual nature or the number of optimization variables in Problem (1), be it a finite-dimensional or even an infinite-dimensional optimization problem. As already pointed out in the introduction of the paper, this result is quite remarkable since available runtime estimates for standard convex and nonconvex optimization algorithms do not enjoy this property. On the other hand, \(\overline{\varSigma }\) is dependent on:

the bound \(\gamma \) on the constraint set C;

the Lipschitz constants K and L of the cost functional F;

the uniform bound \(\sup _k \Vert \varPhi _k \Vert _H\) and the scaling factors \(\sigma _k\) of the chosen orthogonal functions \(\varPhi _k\); and

the lifting parameter \(\rho \) and the termination tolerance \(\varepsilon \) in Algorithm 1.
All these dependencies are illustrated in the following example.
Example 5
Consider the space of square-integrable functions \(H:=L_2[-\pi ,\pi ]\), for which it has been established in Remark 2 that any subset \(G^p\) of p-times differentiable functions with uniformly Lipschitz-continuous p-th derivatives on \([-\pi ,\pi ]\) is regular, with convergence rate \(\overline{R}_C(M,G^p) \le \alpha M^{-p}\) for some constant \(\alpha <\infty \). On choosing the standard trigonometric Fourier basis, such that \(\sigma _k=\pi \) are constant scaling factors and \(K' := K\sup _k \Vert \varPhi _k \Vert _2 = K\), and doing the partitioning using simple interval boxes as in Remark 13, a worst-case iteration count can be obtained as
Furthermore, if the global minimizer of Problem (1) happens to be a smooth (\(\mathscr {C}^\infty \)) function, the convergence rate can be expected to be of the form \(R(M,G^\infty ) = \alpha \exp (-\beta M)\), and Theorem 2 then predicts a worst-case iteration count of
which is much more favorable. \(\square \)
Numerical case study
We consider the Hilbert space \(H := L_2[0,T]\) of square-integrable functions on the interval [0, T], here with \(T=10\). Our focus is on the following nonconvex, infinite-dimensional optimization problem
with the functions \(f_1\) and \(f_2\) given by
Notice the symmetry in the optimization problem (21), as \(F(x) = F(-x)\) and \(x \in C\) if and only if \(-x \in C\). Thus, if \(x^*\) is a global solution point of (21), then \(-x^*\) is also a global solution point.
Although it might be possible to apply techniques from the field of variational analysis to determine the set of optimal solutions, our main objective here is to apply Algorithm 1 without exploiting any particular knowledge about the solution set. For this, we use the Legendre polynomials as basis functions in \(L_2[0,T]\),
which are orthogonal by construction.
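Concretely, this shifted Legendre basis can be generated and its orthogonality verified as in the short sketch below, assuming the normalization \(\langle \varPhi _j, \varPhi _k \rangle = \frac{T}{2k+1}\,\delta _{j,k}\) consistent with the weights used in the earlier examples.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

T = 10.0
phi = lambda k: (lambda t: Legendre.basis(k)(2.0 * t / T - 1.0))

# Gram matrix of the first few basis functions: expected diagonal,
# with entries sigma_k = T / (2k + 1) for this shifted, unnormalized basis
G = np.array([[quad(lambda t: phi(j)(t) * phi(k)(t), 0.0, T)[0]
               for k in range(4)] for j in range(4)])
print(np.round(G, 6))
```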
We start by showing that the functional F is strongly Lipschitz-continuous, with the bounded regular subset G in condition (7) taken as
where we use the shorthand notation \(f_1^t(\tau ) := f_1(t - \tau )\) and \(f_2^t(\tau ) := f_2(t - \tau )\). For all \(x\in L_2[0,T]\) and all \(e\in H\), we have
where L is any upper bound on the term
In order to obtain an explicit bound, we need to further analyze the term \(\sup _{g \in G} \, \left| \langle g, e \rangle \right| \). First of all, we have
Next, recalling that the Legendre approximation error for any smooth function \(g \in L_2[0,T]\) is bounded as
for all \(M \ge 1\), and working out explicit bounds on the derivatives of the functions \(f_1^t\) and \(f_2^t\), we obtain
It follows by Theorem 1 that
Combining all the bounds and substituting \(T = 10\) shows that the constant \(L = 611\) satisfies the condition (22).
Based on the foregoing developments and the considerations in Sect. 3.2, a simple bound \(\varDelta _M(A)\) on the approximation error satisfying (17) can be obtained as
Although rather loose for very small M, this estimate converges quickly to 0 as M grows; for instance, \(\varDelta _7(A) \le 2 \cdot 10^{-4}\). Note also that, in a practical implementation, the computation of \(\varDelta _M(A)\)—and also the validation of the generalized Lipschitz constant L—could be automated using computer algebra programs, such as Chebfun (http://www.chebfun.org/) [16] or MC++ (https://github.com/omegaicl/mcpp) [35].
With regard to the computation of bounds \(L^0_M(A)\) and \(U^0_M(A)\) satisfying (16), we note that F(x) can be interpreted as a quadratic form in x,
with the elements of the matrix Q given by
Of the available approaches [18, 39, 41] to compute bounds \(L^0_M(A)\) and \(U^0_M(A)\) such that
for interval boxes \(A \subseteq \mathbb R^{M+1}\), we use standard LMI relaxation techniques [20] here.
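As an illustration of this step, the sketch below computes a candidate lower bound \(L^0_M(A)\) for a quadratic form over an interval box via the basic Shor relaxation, one standard member of the LMI hierarchy surveyed in [20]; it assumes the cvxpy package with the SCS solver, and the 3-by-3 matrix Q is hypothetical data. An upper bound \(U^0_M(A)\) can then be recovered from any feasible point, e.g. the box center or a local solve.

```python
import cvxpy as cp
import numpy as np

def shor_lower_bound(Q, lo, hi):
    # lower bound on min_{a in [lo, hi]} a^T Q a via the Shor relaxation:
    # Z = [[1, a^T], [a, X]] >> 0, with X_ii relaxing a_i^2 over the box
    n = Q.shape[0]
    Z = cp.Variable((n + 1, n + 1), symmetric=True)
    a, X = Z[0, 1:], Z[1:, 1:]
    cons = [Z >> 0, Z[0, 0] == 1,
            cp.diag(X) <= cp.multiply(lo + hi, a) - lo * hi]
    prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), cons)
    prob.solve(solver=cp.SCS)
    return prob.value

Q = np.array([[1.0, 0.3, 0.0], [0.3, -2.0, 0.5], [0.0, 0.5, 0.7]])
print(shor_lower_bound(Q, -np.ones(3), np.ones(3)))   # candidate L0_M(A)
```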
At this point, we have all the elements needed for implementing Algorithm 1 for Problem (21). On selecting the termination tolerance \(\varepsilon = 10^{-5}\) and the lifting parameter \(\rho =1\), Algorithm 1 terminates after fewer than 100 iterations, applying 8 lifting operations (starting with \(M=1\)). The corresponding decrease in the gap between upper and lower bounds as a function of the lifted subspace dimension M, immediately after each lifting operation, is shown on the left plot of Fig. 1. Upon convergence, the infimum of (21) is bracketed as
and a corresponding \(\varepsilon \)-global solution x is reported on the right plot of Fig. 1; the symmetric function \(-x\) provides another \(\varepsilon \)-global solution for this problem. Overall, this case study demonstrates that the proposed branch-and-lift algorithm is capable of solving such nonconvex, infinite-dimensional optimization problems to global optimality within reasonable computational effort.
Conclusions
This paper has presented a complete-search algorithm, called branch-and-lift, for the global optimization of problems with a nonconvex cost functional and a bounded and convex constraint set defined on a Hilbert space. A key contribution is the determination of runtime complexity bounds for branch-and-lift that are independent of the number of variables in the optimization problem, provided that the cost functional is strongly Lipschitz-continuous with respect to a regular and bounded subset of that Hilbert space. The corresponding convergence conditions are satisfied for a large class of practically relevant problems in the calculus of variations and optimal control. In particular, the complexity analysis in this paper implies that branch-and-lift can be applied to solve potentially nonconvex and infinite-dimensional optimization problems without needing a priori knowledge about the existence or regularity of minimizers, as the runtime bounds solely depend on the structural and regularity properties of the cost functional as well as the underlying Hilbert space and the geometry of the constraint set. This could pave the way for a new complexity analysis of optimization problems, whereby the “complexity” or “hardness” of a problem does not necessarily depend on its number of optimization variables. In order to demonstrate that these algorithmic ideas and complexity analysis are not of pure theoretical interest only, the practical applicability of branch-and-lift has been illustrated with a numerical case study for a problem of calculus of variations. The case study of an optimal control problem in [25] provides another illustration.
Notes
 1.
We have used the integration formula \(\displaystyle \int e^{\sqrt{a x}} \, \mathrm {d}x = \frac{2 e^{\sqrt{ax}}(\sqrt{ax}-1)}{a} + C\) for the integral term in (6).
References
 1.
Akhiezer, N.I.: The Classical Moment Problem and Some Related Questions in Analysis. Translated by N. Kemmer. Hafner Publishing Co., New York (1965)
 2.
Albersmeyer, J., Diehl, M.: The lifted Newton method and its application in optimization. SIAM J. Optim. 20(3), 1655–1684 (2010)
 3.
Anderson, E.J., Nash, P.: Linear Programming in InfiniteDimensional Spaces. Wiley, Hoboken (1987)
 4.
Bampou, D., Kuhn, D.: Polynomial approximations for continuous linear programs. SIAM J. Optim. 22(2), 628–648 (2012)
 5.
Bendsøe, M.P., Sigmund, O.: Topology Optimization: Theory, Methods, and Applications. Springer, Berlin (2004)
 6.
Betts, J.T.: Practical Methods for Optimal Control Using Nonlinear Programming. Advances in Design and Control Series, 2nd edn. SIAM, Philadelphia (2010)
 7.
Biegler, L.T.: Solution of dynamic optimization problems by successive quadratic programming and orthogonal collocation. Comput. Chem. Eng. 8, 243–248 (1984)
 8.
Bock, H.G., Plitt, K.J.: A multiple shooting algorithm for direct solution of optimal control problems. In: Proceedings 9th IFAC World Congress Budapest, pp. 243–247. Pergamon Press, Oxford (1984)
 9.
Bompadre, A., Mitsos, A.: Convergence rate of McCormick relaxations. J. Glob. Optim. 52(1), 1–28 (2012)
 10.
Bompadre, A., Mitsos, A., Chachuat, B.: Convergence analysis of Taylor and McCormickTaylor models. J. Glob. Optim. 57(1), 75–114 (2013)
 11.
Boyd, S., Vandenberghe, L.: Convex Optimization. University Press, Cambridge (2004)
 12.
Bryson, A.E., Ho, Y.: Applied Optimal Control. Hemisphere, Washington (1975)
 13.
Buie, R., Abrham, J.: Numerical solutions to continuous linear programming problems. Z. Oper. Res. 17(3), 107–117 (1973)
 14.
Devolder, O., Glineur, F., Nesterov, Y.: Solving infinitedimensional optimization problems by polynomial approximation. In: Diehl, M., Glineur, F., Jarlebring, E., Michiels, W. (eds.) Recent Advances in Optimization and its Applications in Engineering, pp. 31–40. Springer, Berlin Heidelberg (2010)
 15.
Ditzian, Z., Totik, V.: Moduli of Smoothness. Springer, Berlin (1987)
 16.
Driscoll, T.A., Hale, N., Trefethen, L.N.: Chebfun Guide. Pafnuty Publications, Oxford (2014)
 17.
Floudas, C.A.: Deterministic Global Optimization: Theory, Methods, and Applications. Kluwer, Dordrecht (1999)
 18.
Goemans, M.X., Williamson, D.P.: Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 42(6), 1115–1145 (1995)
 19.
Gottlieb, D., Shu, C.W.: On the Gibbs phenomenon and its resolution. SIAM Rev. 39(4), 644–668 (1997)
 20.
Henrion, D., Tarbouriech, S., Arzelier, D.: LMI approximations for the radius of the intersection of ellipsoids: a survey. J. Optim. Theory Appl. 108(1), 1–28 (2001)
 21.
Henrion, D., Korda, M.: Convex computation of the region of attraction of polynomial control systems. IEEE Trans. Autom. Control 59(2), 297–312 (2014)
 22.
Hinze, M., Pinnau, R., Ulbrich, M., Ulbrich, S.: Optimization with PDE Constraints. Springer, Berlin (2009)
 23.
Horst, R., Tuy, H.: Global Optimization: Deterministic Approaches, 3rd edn. Springer, Berlin, Germany (1996)
 24.
Houska, B., Ferreau, H.J., Diehl, M.: ACADO toolkit—an open-source framework for automatic control and dynamic optimization. Optim. Control Appl. Methods 32, 298–312 (2011)
 25.
Houska, B., Chachuat, B.: Branch-and-lift algorithm for deterministic global optimization in nonlinear optimal control. J. Optim. Theory Appl. 162(1), 208–248 (2014)
 26.
Houska, B., Villanueva, M.E., Chachuat, B.: Stable setvalued integration of nonlinear dynamic systems using affine set parameterizations. SIAM J. Numer. Anal. 53(5), 2307–2328 (2015)
 27.
Jackson, D.: The Theory of Approximation, vol. XI. AMS Colloquium Publication, New York (1930)
 28.
Katznelson, Y.: An Introduction to Harmonic Analysis, 2nd edn. Dover Publications, New York (1976)
 29.
Korda, M., Henrion, D., Jones, C.N.: Convex computation of the maximum controlled invariant set for polynomial control systems. SIAM J. Control Optim. 52(5), 2944–2969 (2014)
 30.
Lasserre, J.B.: Moments, Positive Polynomials and Their Applications. Imperial College Press, London (2009)
 31.
Lin, Y., Stadtherr, M.A.: Validated solutions of initial value problems for parametric ODEs. Appl. Numer. Math. 57(10), 1145–1162 (2007)
 32.
Luo, X., Bertsimas, D.: A new algorithm for state-constrained separated continuous linear programs. SIAM J. Control Optim. 37, 177–210 (1998)
 33.
McCormick, G.P.: Computability of global solutions to factorable nonconvex programs: part I—convex underestimating problems. Math. Program. 10, 147–175 (1976)
 34.
Misener, R., Floudas, C.A.: ANTIGONE: algorithms for continuous/integer global optimization of nonlinear equations. J. Glob. Optim. 59(2–3), 503–526 (2014)
 35.
Mitsos, A., Chachuat, B., Barton, P.I.: McCormick based relaxations of algorithms. SIAM J. Optim. 20, 573–601 (2009)
 36.
Moore, R.E.: Methods and Applications of Interval Analysis. SIAM Studies in Applied Mathematics. SIAM, Philadelphia (1979)
 37.
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation I: Basic Theory. Springer, Berlin (2006)
 38.
Neher, M., Jackson, K.R., Nedialkov, N.S.: On Taylor model based integration of ODEs. SIAM J. Numer. Anal. 45, 236–262 (2007)
 39.
Nemirovski, A., Roos, C., Terlaky, T.: On maximization of quadratic form over intersection of ellipsoids with common center. Math. Program. 86(3), 463–473 (1999)
 40.
Nesterov, Y., Nemirovskii, A.: InteriorPoint Polynomial Methods in Convex Programming. SIAM, Philadelphia (1994)
 41.
Nesterov, Y.: Semidefinite relaxation and nonconvex quadratic optimization. Optim. Methods Softw. 12, 1–20 (1997)
 42.
Nesterov, Y.: Squared functional systems and optimization problems. In: Frenk, H., Roos, K., Terlaky, T., Zhang, S. (eds.) High Performance Optimization, pp. 405–440. Kluwer Academic Publishers, Dordrecht (2000)
 43.
Neumaier, A.: Taylor forms—use and limits. Reliab. Comput. 9(1), 43–79 (2002)
 44.
Neumaier, A.: Complete search in continuous global optimization and constraint satisfaction. Acta Numer. 13, 271–369 (2004)
 45.
Parrilo, P.A.: Polynomial games and sum of squares optimization. In: Proceedings of the 45th IEEE Conference on Decision & Control, pp. 2855–2860. San Diego (CA) (2006)
 46.
Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Wiley, New York (1962)
 47.
Rajyaguru, J., Villanueva, M.E., Houska, B., Chachuat, B.: Chebyshev model arithmetic for factorable functions. J. Glob. Optim. 68(2), 413–438 (2017)
 48.
Saff, E.B., Totik, V.: Polynomial approximation of piecewise analytic functions. J. Lond. Math. Soc. 39(2), 487–498 (1989)
 49.
Sahinidis, N.V.: A general purpose global optimization software package. J. Glob. Optim. 8(2), 201–205 (1996)
 50.
Scott, J.K., Chachuat, B., Barton, P.I.: Nonlinear convex and concave relaxations for the solutions of parametric ODEs. Optim. Control Appl. Methods 34(2), 145–163 (2013)
 51.
von Stryk, O., Bulirsch, R.: Direct and indirect methods for trajectory optimization. Ann. Oper. Res. 37, 357–373 (1992)
 52.
Tawarmalani, M., Sahinidis, N.V.: A polyhedral branchandcut approach to global optimization. Math. Program. 103(2), 225–249 (2005)
 53.
Villanueva, M.E., Houska, B., Chachuat, B.: Unified framework for the propagation of continuoustime enclosures for parametric nonlinear ODEs. J. Glob. Optim. 62(3), 575–613 (2015)
 54.
Vinter, R.: Optimal Control. Springer, Berlin (2010)
 55.
Wang, H., Xiang, S.: On the convergence rates of Legendre approximation. Math. Comput. 81(278), 861–877 (2012)
Acknowledgements
This paper is based upon work supported by the Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/J006572/1, National Natural Science Foundation of China (NSFC) under Grant 61473185, and ShanghaiTech University under Grant F020314012. Financial support from Marie Curie Career Integration Grant PCIG09GA2011293953 and from the Centre of Process Systems Engineering (CPSE) of Imperial College is gratefully acknowledged. The authors would like to thank CoEditor Dr. Sven Leyffer for his constructive comments about minimality of assumptions for the convergence of branchandlift.
Keywords
 Infinitedimensional optimization
 Complete search
 Branchandlift
 Convergence analysis
 Complexity analysis
Mathematics Subject Classification
 49M30
 65K10
 90C26
 93B40