A Control-Theoretic Perspective on Optimal High-Order Optimization

We provide a control-theoretic perspective on optimal tensor algorithms for minimizing a convex function in a finite-dimensional Euclidean space. Given a function $\Phi: \mathbb{R}^d \rightarrow \mathbb{R}$ that is convex and twice continuously differentiable, we study a closed-loop control system that is governed by the operators $\nabla \Phi$ and $\nabla^2 \Phi$ together with a feedback control law $\lambda(\cdot)$ satisfying the algebraic equation $(\lambda(t))^p\|\nabla\Phi(x(t))\|^{p-1} = \theta$ for some $\theta \in (0, 1)$. Our first contribution is to prove the existence and uniqueness of a local solution to this system via the Banach fixed-point theorem. We present a simple yet nontrivial Lyapunov function that allows us to establish the existence and uniqueness of a global solution under certain regularity conditions and analyze the convergence properties of trajectories. The rate of convergence is $O(1/t^{(3p+1)/2})$ in terms of objective function gap and $O(1/t^{3p})$ in terms of squared gradient norm. Our second contribution is to provide two algorithmic frameworks obtained from discretization of our continuous-time system, one of which generalizes the large-step A-HPE framework and the other of which leads to a new optimal $p$-th order tensor algorithm. While our discrete-time analysis can be seen as a simplification and generalization of~\citet{Monteiro-2013-Accelerated}, it is largely motivated by the aforementioned continuous-time analysis, demonstrating the fundamental role that the feedback control plays in optimal acceleration and the clear advantage that the continuous-time perspective brings to algorithmic design. A highlight of our analysis is that we show that all of the $p$-th order optimal tensor algorithms that we discuss minimize the squared gradient norm at a rate of $O(k^{-3p})$, which complements the recent analysis.


Introduction
The interplay between continuous-time and discrete-time perspectives on dynamical systems has had a major impact on optimization theory. Classical examples include (1) the interpretation of steepest descent, heavy-ball and proximal algorithms as the explicit and implicit discretization of gradient-like dissipative systems [Polyak, 1987, Antipin, 1994, Attouch and Cominetti, 1996, Alvarez, 2000, Attouch et al., 2000, Alvarez and Attouch, 2001]; and (2) the explicit discretization of Newton-like and Levenberg-Marquardt regularized systems [Alvarez and Pérez C, 1998, Attouch and Redont, 2001, Alvarez et al., 2002, Attouch and Svaiter, 2011, Attouch et al., 2012, Maingé, 2013, Attouch et al., 2013, Abbas et al., 2014, Attouch et al., 2016a, Attouch and László, 2020b], which gives standard and regularized Newton algorithms. One particularly salient way that these connections have spurred research is via the use of Lyapunov functions to transfer asymptotic behavior and rates of convergence between continuous time and discrete time.
The introduction of Hessian-driven damping into continuous-time dynamics has been a particular milestone in optimization and mechanics. The precursor of this perspective can be found in the variational characterization of the Levenberg-Marquardt method and Newton's method [Alvarez and Pérez C, 1998], a development that inspired work on continuous-time Newton-like approaches for convex minimization [Alvarez and Pérez C, 1998, Attouch and Redont, 2001] and monotone inclusions [Attouch and Svaiter, 2011, Maingé, 2013, Attouch et al., 2013, Abbas et al., 2014, Attouch et al., 2016a, Attouch and László, 2020b]. Building on these works, Alvarez et al. [2002] distinguished Hessian-driven damping from classical continuous Newton formulations and showed its importance in optimization and mechanics. Subsequently, Attouch et al. [2016b] demonstrated the connection between Hessian-driven damping and the forward-backward algorithms in Nesterov acceleration (e.g., FISTA), and combined Hessian-driven damping with asymptotically vanishing damping [Su et al., 2016]. The resulting dynamics takes the following form:
$$\ddot{x}(t) + \frac{\alpha}{t}\dot{x}(t) + \beta\nabla^2\Phi(x(t))\dot{x}(t) + \nabla\Phi(x(t)) = 0, \quad (1.1)$$
where it is worth mentioning that the presence of the Hessian does not entail numerical difficulties since it arises in the form $\nabla^2\Phi(x(t))\dot{x}(t)$, which is the time derivative of the function $t \mapsto \nabla\Phi(x(t))$. Further work in this vein appeared in Shi et al. [2018], where Nesterov acceleration was interpreted via multiscale limits that yield high-resolution differential equations:
$$\ddot{x}(t) + \frac{3}{t}\dot{x}(t) + \sqrt{s}\,\nabla^2\Phi(x(t))\dot{x}(t) + \left(1 + \frac{3\sqrt{s}}{2t}\right)\nabla\Phi(x(t)) = 0. \quad (1.2)$$
These limits were used in particular to distinguish between Polyak's heavy-ball method and NAG, which are not distinguished by naive limiting arguments that yield the same differential equation for both. Although the coefficients differ between Eq. (1.1) and Eq. (1.2), both contain Hessian-driven damping, which corresponds to a correction term obtained via discretization, and which provides fast convergence to zero of the gradients and reduces oscillations. Using this viewpoint, several subtle analyses have recently been provided in work independent of ours [Attouch et al., 2021a]. In particular, they develop a convergence theory for a general inertial system with asymptotically vanishing damping and Hessian-driven damping. Under certain conditions, fast convergence is guaranteed in terms of both the objective function gap and the squared gradient norm. Beyond the aforementioned line of work, however, most of the focus in using continuous-time perspectives to shed light on acceleration has been restricted to the setting of first-order optimization algorithms. As noted in a line of recent work [Monteiro and Svaiter, 2013, Nesterov, 2018, Arjevani et al., 2019, Gasnikov et al., 2019, Bubeck et al., 2019, Song et al., 2021], there is a significant gap in our understanding of optimal p-th order tensor algorithms with p ≥ 2, with existing algorithms and analyses being much more involved than NAG.
In this paper, we show that a continuous-time perspective helps to bridge this gap and yields a unified perspective on first-order and higher-order acceleration. We refer to our work as a control-theoretic perspective, as it involves the study of a closed-loop control system that can be viewed as a differential equation governed by a feedback control law, $\lambda(\cdot)$, satisfying the algebraic equation $(\lambda(t))^p\|\nabla\Phi(x(t))\|^{p-1} = \theta$ for some $\theta \in (0, 1)$. Our approach is similar to that of Attouch et al. [2013, 2016a] for the case without inertia, and it provides a first step toward a theory of autonomous inertial systems that links closed-loop control and optimal high-order tensor algorithms. Mathematically, our system can be written as follows:
$$\ddot{x}(t) + \alpha(t)\dot{x}(t) + \beta(t)\nabla^2\Phi(x(t))\dot{x}(t) + b(t)\nabla\Phi(x(t)) = 0, \quad (1.3)$$
where $(\alpha, \beta, b)$ explicitly depends on the variables $(x, \lambda, a)$, the parameters $c > 0$, $\theta \in (0, 1)$ and the order $p \in \{1, 2, \ldots\}$:
$$\alpha(t) = \frac{2\dot{a}(t)}{a(t)} - \frac{\ddot{a}(t)}{\dot{a}(t)}, \quad \beta(t) = \frac{(\dot{a}(t))^2}{a(t)}, \quad b(t) = \frac{\dot{a}(t)(\dot{a}(t) + \ddot{a}(t))}{a(t)}, \quad a(t) = \frac{1}{4}\left(\int_0^t \lambda(s)\,ds + c\right)^2, \quad (\lambda(t))^p\|\nabla\Phi(x(t))\|^{p-1} = \theta. \quad (1.4)$$
The initial condition is $x(0) = x_0 \in \{x \in \mathbb{R}^d \mid \nabla\Phi(x) \neq 0\}$ and $\dot{x}(0) \in \mathbb{R}^d$. Note that this condition is not restrictive since $\nabla\Phi(x_0) = 0$ implies that the optimization problem has already been solved. A key ingredient in our system is the algebraic equation $(\lambda(t))^p\|\nabla\Phi(x(t))\|^{p-1} = \theta$, which links the feedback control law $\lambda(\cdot)$ and the gradient norm $\|\nabla\Phi(x(\cdot))\|$, and which generalizes an equation appearing in Attouch et al. [2016a] for modeling the proximal Newton algorithm. We recall that Eq. (1.3) has also been studied in Attouch et al. [2021a], who provide a general convergence result when $(\alpha, \beta, b)$ satisfies certain conditions. However, when $p \geq 2$, the specific choice of $(\alpha, \beta, b)$ in Eq. (1.4) does not have an analytic form and it thus seems difficult to verify whether $(\alpha, \beta, b)$ in our control system satisfies that condition (see Attouch et al. [2021a, Theorem 2.1]).
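The algebraic equation in Eq. (1.4) can be solved for $\lambda(t)$ in closed form, which is useful both for intuition and for numerical experimentation. The following sketch (an illustration under our own naming, not part of the formal development) computes the feedback law from the current gradient.

```python
import numpy as np

def feedback_lambda(grad, p, theta):
    """Solve the algebraic equation lambda^p * ||grad||^(p-1) = theta for
    lambda, given a nonzero gradient. For p = 1 this returns the constant
    theta, independent of the gradient."""
    g = np.linalg.norm(grad)
    assert g > 0, "the system is initialized away from stationary points"
    return (theta / g ** (p - 1)) ** (1.0 / p)

# The defining equation holds by construction (here ||grad|| = 5):
lam = feedback_lambda(np.array([3.0, 4.0]), p=2, theta=0.5)
assert abs(lam ** 2 * 5.0 - 0.5) < 1e-12
```

Note that for $p \geq 2$ the feedback law blows up as the gradient vanishes, which is the mechanism behind the growth of $a(t)$ and the acceleration of the trajectory.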
This topic is beyond the scope of this paper and we leave its investigation to future work.
Our contribution. Throughout the paper, unless otherwise indicated, we assume that Φ : R d → R is convex and twice continuously differentiable and the set of global minimizers of Φ is nonempty.
As we shall see, our main results on the existence and uniqueness of solutions and convergence properties of trajectories are valid under this general assumption. We also believe that this general setting paves the way for extensions to nonsmooth convex functions or maximal monotone operators (replacing the gradient by the subdifferential or the operator) [Alvarez et al., 2002, Attouch et al., 2012, 2016b]. This is evidenced by the equivalent first-order reformulations of our closed-loop control system in time and space (without the occurrence of the Hessian). However, we do not pursue these extensions in the current paper.
The main contributions of our work are the following: 1. We study the closed-loop control system of Eq. (1.3) and Eq. (1.4) and prove the existence and uniqueness of a local solution. We show that when p = 1 and c = 0, our feedback law reduces to λ(t) = θ and our overall system reduces to the high-resolution differential equation studied in Shi et al. [2018], showing explicitly that our system extends the high-resolution framework from first-order optimization to high-order optimization.
2. We construct a simple yet nontrivial Lyapunov function that allows us to establish the existence and uniqueness of a global solution under regularity conditions (see Theorem 3.1). We also use the Lyapunov function to analyze the convergence rates of the solution trajectories; in particular, we show that the convergence rate is O(t −(3p+1)/2 ) in terms of objective function gap and O(t −3p ) in terms of squared gradient norm.
3. We provide two algorithmic frameworks based on the implicit discretization of our closed-loop control system, one of which generalizes the large-step A-HPE in Monteiro and Svaiter [2013]. Our iteration complexity analysis is largely motivated by the aforementioned continuous-time analysis, simplifying the analysis in Monteiro and Svaiter [2013] for the case of p = 2 and generalizing it to p > 2 in a systematic manner (see Theorems 4.3 and 4.6 for the details).
4. We combine the algorithmic frameworks with an approximate tensor subroutine, yielding a suite of optimal p-th order tensor algorithms for minimizing a convex smooth function Φ that has Lipschitz p-th order derivatives. The resulting algorithms include not only the algorithms studied in previous works [Gasnikov et al., 2019, Bubeck et al., 2019] but also a new optimal p-th order tensor algorithm. A highlight of our analysis is to show that all these p-th order optimal algorithms minimize the squared gradient norm at a rate of O(k −3p ), complementing the recent analysis in the aforementioned works.
Further related work. In addition to the aforementioned works, we provide a few additional remarks regarding related work on accelerated first-order and high-order algorithms for convex optimization. A significant body of recent work in convex optimization focuses on understanding the underlying principle behind Nesterov's accelerated first-order algorithm (NAG) [Nesterov, 1983, 2018], with a particular focus on the interpretation of Nesterov acceleration as a temporal discretization of a continuous-time dynamical system [Krichene et al., 2015, Su et al., 2016, Attouch and Peypouquet, 2016, May, 2017, Attouch et al., 2019b, 2018, Vassilis et al., 2018, Shi et al., 2018, Diakonikolas and Orecchia, 2019, Muehlebach and Jordan, 2019, Attouch and Peypouquet, 2019, Attouch et al., 2019a, Sebbouh et al., 2020, Attouch and Cabot, 2020a, 2021a, Adly and Attouch, 2021]. A line of new first-order algorithms has been obtained from continuous-time dynamics via various advanced numerical integration strategies [Scieur et al., 2017, Betancourt et al., 2018, Zhang et al., 2018, Maddison et al., 2018, Shi et al., 2019, Wilson et al., 2019]. In particular, Scieur et al. [2017] showed that a basic gradient flow system and a multi-step integration scheme yield a class of accelerated first-order optimization algorithms. Zhang et al. [2018] applied Runge-Kutta integration to an inertial gradient system without Hessian-driven damping [Wibisono et al., 2016] and showed that the resulting algorithm is faster than NAG when the objective function is sufficiently smooth and when the order of the integrator is sufficiently large. Maddison et al. [2018] and França et al. [2020] both considered conformal Hamiltonian systems and showed that the resulting discrete-time algorithms achieve fast convergence under certain smoothness conditions. Very recently, Shi et al.
[2019] have rigorously justified the use of symplectic Euler integrators over explicit and implicit Euler integration, a direction further studied by Muehlebach and Jordan [2021] and França et al. [2021]. Unfortunately, none of these approaches are suitable for interpreting optimal acceleration in high-order tensor algorithms.
Research on acceleration in the second-order setting dates back to Nesterov's accelerated cubic regularized Newton algorithm (ACRN) [Nesterov, 2008] and Monteiro and Svaiter's accelerated Newton proximal extragradient (A-NPE) [Monteiro and Svaiter, 2013]. The ACRN algorithm was extended to a p-th order tensor algorithm with the improved convergence rate of O(k −(p+1) ) [Baes, 2009] and an adaptive p-th order tensor algorithm with essentially the same rate [Jiang et al., 2020]. This extension was also revisited by Nesterov [2019] with a discussion of the efficient implementation of a third-order tensor algorithm. Meanwhile, within the alternative A-NPE framework, a p-th order tensor algorithm was studied in a line of works [Gasnikov et al., 2019, Bubeck et al., 2019] and was shown to achieve a convergence rate of O(k −(3p+1)/2 ), matching the lower bound [Arjevani et al., 2019]. Subsequently, a high-order coordinate descent algorithm was studied in Amaral et al. [2020], and very recently, the high-order A-NPE framework has been specialized to the strongly convex setting [Alves, 2021], generalizing the discrete-time algorithms in this paper with an improved convergence rate. Beyond the setting of Lipschitz continuous derivatives, high-order algorithms and their accelerated variants have been adapted to the more general setting of Hölder continuous derivatives [Grapiglia and Nesterov, 2017, 2019, Doikov and Nesterov, 2019, Grapiglia and Nesterov, 2020b], and an optimal algorithm has been proposed in Song et al. [2021]. Other settings include structured convex nonsmooth minimization [Bullins, 2020], convex-concave minimax optimization and monotone variational inequalities [Bullins and Lai, 2020, Ostroukhov et al., 2020], and structured smooth convex minimization [Nesterov, 2020b,a, Kamzolov, 2020, Kamzolov and Gasnikov, 2020].
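To make the rate comparison concrete, the exponents can be tabulated directly; the helper below (our own illustrative naming) contrasts the accelerated-but-suboptimal exponent $p+1$ of ACRN-style methods with the optimal exponent $(3p+1)/2$ matched by the lower bound of Arjevani et al. [2019].

```python
def rate_exponents(p):
    """Return (ACRN-style exponent, optimal exponent) for p-th order
    methods: O(k^-(p+1)) versus the optimal O(k^-(3p+1)/2) attained
    within the A-NPE framework."""
    return p + 1, (3 * p + 1) / 2

# The two rates coincide at p = 1 (NAG), and the optimal rate is
# strictly better for every p >= 2 (e.g., k^-3 versus k^-3.5 at p = 2).
assert rate_exponents(1) == (2, 2.0)
assert all(rate_exponents(p)[1] > rate_exponents(p)[0] for p in range(2, 10))
```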
In the nonconvex setting, high-order algorithms have also been proposed and analyzed [Birgin et al., 2016, Martínez, 2017, Cartis et al., 2018, 2019].
Unfortunately, the derivations of these algorithms do not flow from a single underlying principle but tend to involve case-specific algebra. As in the case of first-order algorithms, one would hope that a continuous-time perspective would offer unification, but the only work that we are aware of in this regard is Song et al. [2021], and the connection to dynamical systems in that work is unclear. In particular, some aspects of the UAF algorithm (see Song et al. [2021, Algorithm 5.1]), including the conditions in Eq. (5.31) and Eq. (5.32), do not have a continuous-time interpretation but rely on case-specific algebra. Moreover, their continuous-time framework reduces to an inertial system without Hessian-driven damping in the first-order setting, which has been proven to be an inaccurate surrogate as mentioned earlier.
We are also aware of other types of discrete-time algorithms [Zhang et al., 2018, Maddison et al., 2018, Wilson et al., 2019] that were derived from a continuous-time perspective with theoretical guarantees under certain conditions. In particular, Wilson et al. [2019] derived a family of first-order algorithms by appeal to the explicit time discretization of the accelerated rescaled gradient dynamics. Their new algorithms are guaranteed to (surprisingly) achieve the same convergence rate as the existing optimal tensor algorithms [Gasnikov et al., 2019, Bubeck et al., 2019]. However, a strong smoothness assumption is necessary, which may rule out many interesting applications. In contrast, all of the optimization algorithms developed in this paper are applicable to general convex and smooth problems with the optimal rate of convergence.
Organization. The remainder of the paper is organized as follows. In Section 2, we study the closed-loop control system in Eq. (1.3) and Eq. (1.4) and prove the existence and uniqueness of a local solution using the Banach fixed-point theorem. In Section 3, we show that our system permits a simple yet nontrivial Lyapunov function which allows us to establish the existence and uniqueness of a global solution and derive convergence rates of solution trajectories. In Section 4, we provide two conceptual algorithmic frameworks based on the implicit discretization of our closed-loop control system as well as specific optimal p-th order tensor algorithms. Our iteration complexity analysis is largely motivated by the continuous-time analysis of our system, demonstrating that these algorithms achieve fast gradient minimization. In Section 5, we conclude our work with a brief discussion of future research directions.
Notation. We use bold lower-case letters such as x to denote vectors, and upper-case letters such as X to denote tensors. For a vector $x \in \mathbb{R}^d$, we let $\|x\|$ denote its $\ell_2$ (Euclidean) norm and let $B_\delta(x) = \{z \in \mathbb{R}^d \mid \|z - x\| \leq \delta\}$ denote the ball of radius $\delta > 0$ centered at $x$. For a $p$-th order tensor $X$, we denote by $\|X\|_{\mathrm{op}} = \max_{\|z_j\| = 1, 1 \leq j \leq p} X[z_1, \cdots, z_p]$ its operator norm. Fixing $p \geq 1$, we define $\mathcal{F}^p_\ell(\mathbb{R}^d)$ as the class of convex functions on $\mathbb{R}^d$ with $\ell$-Lipschitz $p$-th order derivatives; that is, $f \in \mathcal{F}^p_\ell(\mathbb{R}^d)$ if and only if $f$ is convex and $\|\nabla^p f(x) - \nabla^p f(x')\|_{\mathrm{op}} \leq \ell\|x - x'\|$ for all $x, x' \in \mathbb{R}^d$, where $\nabla^p f(x)$ is the $p$-th order derivative tensor of $f$ at $x \in \mathbb{R}^d$. Given a tolerance $\epsilon \in (0, 1)$, the notation $a = O(b(\epsilon))$ stands for an upper bound, $a \leq C b(\epsilon)$, in which $C > 0$ is independent of $\epsilon$.

The Closed-Loop Control System
In this section, we study the closed-loop control system in Eq. (1.3) and Eq. (1.4). We start by rewriting our system as a first-order system in time and space (without the occurrence of the Hessian), which is important to our subsequent analysis and implicit time discretization. Then, we analyze the algebraic equation $(\lambda(t))^p\|\nabla\Phi(x(t))\|^{p-1} = \theta$ for $\theta \in (0, 1)$ and prove the existence and uniqueness of a local solution by appeal to the Banach fixed-point theorem. We conclude by discussing other systems in the literature that exemplify our general framework.

First-order system in time and space
We rewrite the closed-loop control system in Eq. (1.3) and Eq. (1.4), where $(\alpha, \beta, b)$ explicitly depend on the variables $(x, \lambda, a)$, the parameters $c > 0$, $\theta \in (0, 1)$ and the order $p \in \{1, 2, \ldots\}$, as a first-order system. By multiplying both sides of the first equation by $\frac{a(t)}{\dot{a}(t)}$ and using the definitions of $\alpha(t)$, $\beta(t)$ and $b(t)$, we have
$$\frac{a(t)}{\dot{a}(t)}\ddot{x}(t) + \left(2 - \frac{a(t)\ddot{a}(t)}{(\dot{a}(t))^2}\right)\dot{x}(t) + \dot{a}(t)\nabla^2\Phi(x(t))\dot{x}(t) + (\dot{a}(t) + \ddot{a}(t))\nabla\Phi(x(t)) = 0. \quad (2.1)$$
Defining $z_1(t) = \frac{a(t)}{\dot{a}(t)}\dot{x}(t)$ and $z_2(t) = \dot{a}(t)\nabla\Phi(x(t))$, we have
$$\dot{z}_1(t) = \frac{a(t)}{\dot{a}(t)}\ddot{x}(t) + \left(1 - \frac{a(t)\ddot{a}(t)}{(\dot{a}(t))^2}\right)\dot{x}(t), \qquad \dot{z}_2(t) = \ddot{a}(t)\nabla\Phi(x(t)) + \dot{a}(t)\nabla^2\Phi(x(t))\dot{x}(t).$$
Putting these pieces together yields
$$\frac{d}{dt}\left[x(t) + z_1(t) + z_2(t)\right] = -\dot{a}(t)\nabla\Phi(x(t)).$$
Integrating this equation over the interval $[0, t]$, we have
$$x(t) + z_1(t) + z_2(t) = x(0) + z_1(0) + z_2(0) - \int_0^t \dot{a}(s)\nabla\Phi(x(s))\,ds.$$
Combining this with the definition of $z_1(t)$ and $z_2(t)$, we see that $z_1(0) + x(0) + z_2(0)$ is completely determined by the initial condition and the parameters $c > 0$ and $\theta \in (0, 1)$. For simplicity, we define $v_0 := z_1(0) + x(0) + z_2(0)$ and rewrite Eq. (2.1) in the following form:
$$x(t) + \frac{a(t)}{\dot{a}(t)}\dot{x}(t) + \dot{a}(t)\nabla\Phi(x(t)) = v_0 - \int_0^t \dot{a}(s)\nabla\Phi(x(s))\,ds. \quad (2.2)$$
By introducing a new variable $v(t) = v_0 - \int_0^t \dot{a}(s)\nabla\Phi(x(s))\,ds$, we can rewrite Eq. (2.2) in an equivalent form. Summarizing, the closed-loop control system in Eq. (1.3) and Eq. (1.4) can be written as a first-order system in time and space as follows:
$$\dot{x}(t) = \frac{\dot{a}(t)}{a(t)}\left(v(t) - x(t) - \dot{a}(t)\nabla\Phi(x(t))\right), \quad \dot{v}(t) = -\dot{a}(t)\nabla\Phi(x(t)), \quad (\lambda(t))^p\|\nabla\Phi(x(t))\|^{p-1} = \theta, \quad a(t) = \frac{1}{4}\left(\int_0^t \lambda(s)\,ds + c\right)^2. \quad (2.3)$$
We also provide another first-order system in time and space with the different variables $(x, v, \lambda, \gamma)$, obtained via the change of variable $\gamma(t) = 1/a(t)$ (Eq. (2.4)). We study this system because its implicit time discretization leads to a new algorithmic framework which does not appear in the literature. By the definition of $a(t)$ and $\gamma(t)$, we have $a(t) = \frac{1}{\gamma(t)}$, which implies that $\dot{a}(t) = -\frac{\dot{\gamma}(t)}{\gamma^2(t)}$.
Remark 2.2 The first-order systems in Eq. (2.3) and Eq. (2.4) pave the way for extensions to nonsmooth convex functions or maximal monotone operators (replacing the gradient by the subdifferential or the operator), as done in Alvarez et al. [2002] and Attouch et al. [2012, 2016b]. In this setting, either the open-loop case or the closed-loop case without inertia has been studied in the literature [Attouch and Svaiter, 2011, Maingé, 2013, Attouch et al., 2013, Abbas et al., 2014, Attouch et al., 2016a, Bot and Csetnek, 2016, Attouch and Cabot, 2018, Attouch and László, 2020b], but there is significantly less work on the case of a closed-loop control system with inertia. For recent progress in this direction, we refer to the recent literature and the references therein.
In view of Proposition 2.3, for any fixed $x$ with $\nabla\Phi(x) \neq 0$, there exists a unique $\lambda > 0$ such that $\varphi(\lambda, x) = \theta^{1/p}$ for some $\theta \in (0, 1)$. We accordingly define the set $\Omega \subseteq \mathbb{R}^d$ and the mapping $\Lambda_\theta: \Omega \rightarrow (0, \infty)$ as follows:
$$\Omega = \{x \in \mathbb{R}^d \mid \nabla\Phi(x) \neq 0\}, \qquad \Lambda_\theta(x) = \theta^{1/p}\|\nabla\Phi(x)\|^{-\frac{p-1}{p}}. \quad (2.6)$$
We now provide several basic results concerning $\Omega$ and $\Lambda_\theta(\cdot)$ which are crucial to the proof of existence and uniqueness presented in the next subsection.

Proposition 2.4 The set Ω is open.
Proof. Given $x \in \Omega$, it suffices to show that $B_\delta(x) \subseteq \Omega$ for some $\delta > 0$. Since $\Phi$ is twice continuously differentiable, $\nabla\Phi$ is locally Lipschitz; that is, there exist $\bar{\delta} > 0$ and $L > 0$ such that
$$\|\nabla\Phi(z) - \nabla\Phi(x)\| \leq L\|z - x\|, \quad \textrm{for all } z \in B_{\bar{\delta}}(x).$$
Combining this inequality with the triangle inequality, we have $\|\nabla\Phi(z)\| \geq \|\nabla\Phi(x)\| - L\|z - x\|$. Let $\delta = \min\{\bar{\delta}, \frac{\|\nabla\Phi(x)\|}{2L}\}$. Then, for any $z \in B_\delta(x)$, we have $\|\nabla\Phi(z)\| \geq \frac{1}{2}\|\nabla\Phi(x)\| > 0$, so that $z \in \Omega$. This completes the proof.
For $p \geq 2$, the function $x \mapsto x^{-\frac{p-1}{p}}$ is locally Lipschitz at any point $x > 0$. Also, by Proposition 2.4, $\Omega$ is an open set. Putting these pieces together yields that $\|\nabla\Phi(\cdot)\|^{-\frac{p-1}{p}}$ is locally Lipschitz over $\Omega$; that is, for each $x \in \Omega$ there exist $\delta > 0$ and $L > 0$ such that
$$\left| \|\nabla\Phi(z)\|^{-\frac{p-1}{p}} - \|\nabla\Phi(z')\|^{-\frac{p-1}{p}} \right| \leq L\|z - z'\|, \quad \textrm{for all } z, z' \in B_\delta(x).$$
Since $\Lambda_\theta(\cdot) = \theta^{1/p}\|\nabla\Phi(\cdot)\|^{-\frac{p-1}{p}}$, the mapping $\Lambda_\theta$ is locally Lipschitz over $\Omega$. This completes the proof.

Existence and uniqueness of a local solution
We prove the existence and uniqueness of a local solution of the closed-loop control system in Eq. (1.3) and Eq. (1.4) by appeal to the Banach fixed-point theorem. Using the results in Section 2.1 (see Eq. (2.2)), our system can be equivalently rewritten, and using the mapping $\Lambda_\theta: \Omega \rightarrow (0, \infty)$ (see Eq. (2.6)), it can be further formulated as an autonomous system. Putting these pieces together, we arrive at an autonomous system in the following compact form:
$$\dot{x}(t) = F(t, x(t)), \qquad x(0) = x_0, \quad (2.7)$$
where the vector field $F: [0, +\infty) \times \Omega \rightarrow \mathbb{R}^d$ is given in Eq. (2.8). A common method for proving the existence and uniqueness of a local solution is via appeal to the Cauchy-Lipschitz theorem [Coddington and Levinson, 1955, Theorem I.3.1]. This theorem, however, requires that $F(t, x)$ be continuous in $t$ and Lipschitz in $x$, and this is not immediate in our case due to the appearance of $\int_0^t \Lambda_\theta(x(s))\,ds$. We instead recall that the proof of the Cauchy-Lipschitz theorem is generally based on the Banach fixed-point theorem [Granas and Dugundji, 2013], and we avail ourselves directly of the latter theorem. In particular, we construct Picard iterates $\psi_k$ whose limit is a fixed point of a contraction $T$. We have the following theorem.
Theorem 2.6 There exists $t_0 > 0$ such that the autonomous system in Eq. (2.7) and Eq. (2.8) has a unique solution $x(\cdot)$ on the interval $[0, t_0]$.
Proof. By Proposition 2.4 and the initial condition $x_0 \in \Omega$, there exists $\delta > 0$ such that $B_\delta(x_0) \subseteq \Omega$. Note that $\Phi$ is twice continuously differentiable. By the definition of $\Lambda_\theta$, we obtain that $\Lambda_\theta(z)$ and $\|\nabla\Phi(z)\|$ are both bounded for any $z \in B_\delta(x_0)$. Putting these pieces together shows that there exists $M > 0$ such that $\|F(t, x(t))\| \leq M$ for any continuous function $x(\cdot)$ with values in $B_\delta(x_0)$ (Eq. (2.9)). The set of such functions is not empty since the constant function $x \equiv x_0$ is one element. Letting $t_1 = \min\{1, \frac{\delta}{M}\}$, we define $\mathcal{X}$ as the space of all continuous functions $x$ on $[0, t_0]$, for some $t_0 < t_1$, whose graph is contained entirely inside the rectangle $[0, t_0] \times B_\delta(x_0)$. For any $x \in \mathcal{X}$, we define
$$(Tx)(t) = x_0 + \int_0^t F(s, x(s))\,ds, \quad \textrm{for all } t \in [0, t_0].$$
Since $\|F(s, x(s))\| \leq M$ and $t_0 < t_1 \leq \delta/M$, the graph of $Tx$ remains inside $[0, t_0] \times B_\delta(x_0)$; putting these pieces together yields that $T$ maps $\mathcal{X}$ to itself. By the fundamental theorem of calculus, $z = Tx$ satisfies $\dot{z}(t) = F(t, x(t))$. By a standard argument from ordinary differential equation theory, $\dot{x}(t) = F(t, x(t))$ and $x(0) = x_0$ if and only if $x$ is a fixed point of $T$. Thus, it suffices to show the existence and uniqueness of a fixed point of $T$. We consider the Picard iterates $\{\psi_k\}_{k \geq 0}$ with $\psi_0(t) = x_0$ for all $t \in [0, t_0]$ and $\psi_{k+1} = T\psi_k$ for all $k \geq 0$. By the Banach fixed-point theorem [Granas and Dugundji, 2013], the Picard iterates converge to a unique fixed point of $T$ if $\mathcal{X}$ is a nonempty and complete metric space and $T$ is a contraction from $\mathcal{X}$ to $\mathcal{X}$.
First, we show that $\mathcal{X}$ is a nonempty and complete metric space. We equip $\mathcal{X}$ with the supremum metric $d(x, y) = \sup_{t \in [0, t_0]} \|x(t) - y(t)\|$. It is easy to verify that $d$ is a metric and that $(\mathcal{X}, d)$ is a complete metric space (see Sutherland [2009] for the details). In addition, $\mathcal{X}$ is nonempty since the constant function $x \equiv x_0$ is one element.
It remains to prove that $T$ is a contraction for some $t_0 < t_1$. Indeed, $\Lambda_\theta(z)$ and $\|\nabla\Phi(z)\|$ are bounded for all $z \in B_\delta(x_0)$; that is, there exists $M_1 > 0$ such that $\max\{\Lambda_\theta(z), \|\nabla\Phi(z)\|\} \leq M_1$ for all $z \in B_\delta(x_0)$. By Proposition 2.5, $\Lambda_\theta$ and $\sqrt{\Lambda_\theta}$ are continuous and locally Lipschitz over $\Omega$ (Eq. (2.10)). Note that $\Phi$ is twice continuously differentiable; thus, there exists $L_2 > 0$ such that $\nabla\Phi$ is $L_2$-Lipschitz over $B_\delta(x_0)$. We now proceed to the main proof. By the triangle inequality, we decompose the difference $\|(Tx)(t) - (Ty)(t)\|$ into three terms, $\mathrm{I}$, $\mathrm{II}$ and $\mathrm{III}$. First, the key inequality for the subsequent analysis, together with Eq. (2.10), yields a bound on $\mathrm{I}$. Second, we combine Eq. (2.11) with $\Lambda_\theta(x(t)) \leq \sqrt{M_1}$, Eq. (2.10) and $0 < s \leq t_0 < t_1 < 1$ to bound $\mathrm{II}$; we also use $\max\{\Lambda_\theta(x(t)), \|\nabla\Phi(x(t))\|\} \leq M_1$ together with $0 < w \leq s \leq t_0 < t_1 < 1$. Finally, by a similar argument, we bound $\mathrm{III}$. Combining the upper bounds for $\mathrm{I}$, $\mathrm{II}$ and $\mathrm{III}$, we have $d(Tx, Ty) \leq \bar{M} t_0\, d(x, y)$, where $\bar{M}$ is a constant that does not depend on $t_0$ (in fact it depends on $c$, $x_0$, $\delta$, $\Phi(\cdot)$ and $\Lambda_\theta(\cdot)$). Choosing $t_0 < \min\{t_1, \frac{1}{2\bar{M}}\}$ makes $T$ a contraction. This completes the proof.
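The fixed-point construction in the proof can be illustrated numerically. The sketch below (a generic illustration under our own naming, with the integral defining $T$ approximated by the trapezoidal rule) runs Picard iterates for the scalar model problem $\dot{x} = x$, $x(0) = 1$, whose unique solution is $e^t$.

```python
import numpy as np

def picard_iterates(F, x0, t_grid, num_iters):
    """Iterate psi_{k+1}(t) = x0 + int_0^t F(psi_k(s)) ds, i.e., the
    Picard map T, with the integral approximated by the cumulative
    trapezoidal rule on t_grid."""
    psi = np.full_like(t_grid, x0, dtype=float)
    for _ in range(num_iters):
        vals = F(psi)
        increments = 0.5 * (vals[1:] + vals[:-1]) * np.diff(t_grid)
        psi = x0 + np.concatenate(([0.0], np.cumsum(increments)))
    return psi

t_grid = np.linspace(0.0, 1.0, 201)
psi = picard_iterates(lambda x: x, 1.0, t_grid, num_iters=30)
assert abs(psi[-1] - np.e) < 1e-3  # the iterates approach the solution e^t
```

On a window of length at most $t_1 < 1$ the map is a contraction, so the iterates converge geometrically, which is the mechanism exploited in the proof of Theorem 2.6.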

Discussion
We compare the closed-loop control system in Eq. (1.3) and Eq. (1.4) with four main classes of systems in the literature.
Hessian-driven damping. The formal introduction of Hessian-driven damping in optimization dates to Alvarez et al. [2002], with many subsequent developments; see, e.g., Attouch et al. [2016b]. The system studied in this literature takes the form of Eq. (1.1). In a Hilbert space setting and when $\alpha > 3$, the literature has established the weak convergence of any solution trajectory to a global minimizer of $\Phi$ and a convergence rate of $o(1/t^2)$ in terms of the objective function gap. Recall also that Shi et al. [2018] interpreted Nesterov acceleration as the discretization of the high-resolution differential equation in Eq. (1.2) and showed that this equation distinguishes between Polyak's heavy-ball method and Nesterov's accelerated gradient method. In the special case in which $c = 0$ and $p = 1$, our feedback law reduces to $\lambda(t) = \theta$ and our system in Eq. (1.3) and Eq. (1.4) becomes
$$\ddot{x}(t) + \frac{3}{t}\dot{x}(t) + \theta^2\nabla^2\Phi(x(t))\dot{x}(t) + \theta^2\left(1 + \frac{1}{t}\right)\nabla\Phi(x(t)) = 0,$$
which also belongs to the class of high-resolution differential equations. Moreover, for $c = 0$ and $p = 1$, our system can be studied within the recently proposed framework of Attouch et al. [2021a]; indeed, in this case $(\alpha, \beta, b)$ in Attouch et al. [2021a, Theorem 2.1] has an analytic form. However, the choice of $(\alpha, \beta, b)$ in our general setting in Eq. (1.4), for $p \geq 2$, does not have an analytic form and it is difficult to verify whether $(\alpha, \beta, b)$ in this case satisfies their condition.
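A useful consequence of the definition $a(t) = \frac{1}{4}(\int_0^t \lambda(s)\,ds + c)^2$ is the identity $\dot{a}(t) = \lambda(t)\sqrt{a(t)}$, which drives the reduction above. The snippet below (an illustrative check with arbitrary parameter values) verifies it by finite differences in the $p = 1$ regime, where $\lambda \equiv \theta$ and $a$ has the closed form $a(t) = (\theta t + c)^2/4$.

```python
import numpy as np

theta, c = 0.5, 1.0
a = lambda t: 0.25 * (theta * t + c) ** 2   # a(t) under the constant law lambda = theta
t, h = 2.0, 1e-6
a_dot = (a(t + h) - a(t - h)) / (2 * h)     # central finite difference for a'(t)
# Verify the identity a'(t) = lambda(t) * sqrt(a(t)) with lambda(t) = theta:
assert abs(a_dot - theta * np.sqrt(a(t))) < 1e-8
```

The same identity explains why a rapidly growing feedback law $\lambda$ (which occurs for $p \geq 2$ as the gradient vanishes) forces faster growth of $a(t)$, and hence a faster rate in the Lyapunov analysis of Section 3.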
Newton and Levenberg-Marquardt regularized systems. The precursor of this perspective was developed by Alvarez and Pérez C [1998] in a variational characterization of general regularization algorithms. By constructing a regularization $\Phi(\cdot, \epsilon)$ of the potential function satisfying $\Phi(\cdot, \epsilon) \rightarrow \Phi$ as $\epsilon \rightarrow 0$, they studied a continuous Newton-type system driven by the regularized potential. Subsequently, Attouch and Redont [2001] and Attouch and Svaiter [2011] studied Newton dissipative and Levenberg-Marquardt regularized systems. These systems have been shown to be well defined and stable with robust asymptotic behavior [Attouch and Svaiter, 2011, Attouch et al., 2013, Abbas et al., 2014], further motivating the study of the following inertial gradient system with constant damping and Hessian-driven damping [Alvarez et al., 2002]:
$$\ddot{x}(t) + \alpha\dot{x}(t) + \beta\nabla^2\Phi(x(t))\dot{x}(t) + \nabla\Phi(x(t)) = 0.$$
This system attains strong asymptotic stabilization and fast convergence properties [Alvarez et al., 2002, Attouch et al., 2012] and can be extended to solve monotone inclusion problems with theoretical guarantees [Attouch and Svaiter, 2011, Maingé, 2013, Attouch et al., 2013, Abbas et al., 2014, Attouch et al., 2016a, Attouch and László, 2020b]. However, all of these systems are aimed at interpreting standard and regularized Newton algorithms and fail to model optimal acceleration even for the second-order algorithms in Monteiro and Svaiter [2013].
Recently, Attouch et al. [2016a] proposed a proximal Newton algorithm for solving monotone inclusions, motivated by a closed-loop control system without inertia. This algorithm attains a suboptimal convergence rate of $O(t^{-2})$ in terms of the objective function gap.
Closed-loop control systems. The closed-loop damping approach in Attouch et al. [2013, 2016a] closely resembles ours. In particular, they interpret various Newton-type methods as discretizations of a closed-loop control system without inertia and prove the existence and uniqueness of a solution as well as the convergence rate of the solution trajectory. There are, however, some significant differences between our work and theirs. In particular, the appearance of inertia is well known to make the analysis much more challenging. Standard existence and uniqueness proofs based on the Cauchy-Lipschitz theorem suffice to analyze the system of Attouch et al. [2013, 2016a] thanks to the lack of inertia, while Picard iterates and the Banach fixed-point theorem are necessary for our analysis. The construction of the Lyapunov function is also more difficult for the system with inertia. This is an active research area and we refer the interested reader to a recent article for a comprehensive treatment of this topic.
Continuous-time interpretation of high-order tensor algorithms. There is comparatively little work on continuous-time perspectives on high-order tensor algorithms; indeed, we are aware of only Wibisono et al. [2016] and Song et al. [2021].
By appealing to a variational formulation, Wibisono et al. [2016] derived an inertial gradient system with asymptotically vanishing damping. Solving the associated minimization problem yields $z(t) = x_0 - \int_0^t \dot{a}(s)\nabla\Phi(x(s))\,ds$; substituting and rearranging yields the second-order system in Eq. (2.14). Compared to our closed-loop control system, the system in Eq. (2.14) is open-loop and lacks Hessian-driven damping. Moreover, $a(t)$ needs to be determined by hand, and Song et al. [2021] do not establish existence or uniqueness of solutions.

Lyapunov Function
In this section, we construct a Lyapunov function that allows us to prove existence and uniqueness of a global solution of our closed-loop control system and to analyze convergence rates. As we will see, an analysis of the rate of decrease of the Lyapunov function, together with the algebraic equation, permits the derivation of new convergence rates for both the objective function gap and the squared gradient norm.

Existence and uniqueness of a global solution
Our main theorem on the existence and uniqueness of a global solution is summarized in Theorem 3.1. Remark 3.2 Intuitively, the feedback law $\lambda(\cdot)$, which we will show satisfies $\lambda(t) \rightarrow +\infty$ as $t \rightarrow +\infty$, is linked to the gradient norm $\|\nabla\Phi(x(\cdot))\|$ via the algebraic equation. Since we are interested in the worst-case convergence rate of solution trajectories, which corresponds to the worst-case iteration complexity of discrete-time algorithms, it is necessary that $\lambda$ not change dramatically. In open-loop Levenberg-Marquardt systems, Attouch and Svaiter [2011] impose the same condition on the regularization parameters. In closed-loop control systems, however, $\lambda$ is not a given datum but an emergent component of the dynamics. Thus, it is preferable to prove that $\lambda$ satisfies this condition rather than assuming it, as done in Attouch et al. [2013, Theorem 5.2] and Attouch et al. [2016a, Theorem 2.4] for a closed-loop control system without inertia. The key step in their proof is to show that $\lambda(t) \leq \lambda(0)e^t$ locally by exploiting the specific structure of their system. This technical approach is, however, not applicable to our system due to the incorporation of the inertia term; see Section 3.3 for further discussion.
Recall that the system in Eq. (1.3) and Eq. (1.4) can be equivalently written as a first-order system in time and space, as in Eq. (2.3). Accordingly, we define a simple Lyapunov function in Eq. (3.1), where $x^\star$ is a global optimal solution of Φ.
Wilson et al. [2021] construct a unified time-dependent Lyapunov function using the Bregman divergence and show that their approach is equivalent to Nesterov's estimate-sequence technique in a number of cases, including quasi-monotone subgradient, accelerated gradient descent and conditional gradient. Our Lyapunov function differs from existing choices in that v is not a standard momentum term depending on ẋ, but depends on x, λ and ∇Φ; see Eq. (2.3).
We provide two technical lemmas that characterize the descent property of E and the boundedness of the local solution.

Lemma 3.4 Suppose that (x(·), v(·)) is a local solution of the first-order system in Eq. (2.3). Then, we have
$$\frac{d}{dt}E(t) \;\le\; -\theta^{1/p}\,a(t)\,\|\nabla\Phi(x(t))\|^{\frac{p+1}{p}}.$$
Proof. By the definition of E, we differentiate along the trajectory and collect the resulting terms. By the convexity of Φ, we have $\Phi(x^\star) \ge \Phi(x(t)) + \langle\nabla\Phi(x(t)), x^\star - x(t)\rangle$. This together with the algebraic equation implies $\mathrm{II} \le -\theta^{1/p}a(t)\|\nabla\Phi(x(t))\|^{\frac{p+1}{p}}$. Putting all these pieces together yields the desired inequality.
Lemma 3.5 Suppose that (x(·), v(·)) is a local solution of the first-order system in Eq. (2.3). Then, (x(·), v(·)) is bounded over the interval [0, t_0] and the upper bound depends only on the initial condition.
Proof. By Lemma 3.4, the function E is nonnegative and nonincreasing on the interval [0, t_0]. This implies that E(t) ≤ E(0) for any t ∈ [0, t_0]. Therefore, v(·) is bounded on the interval [0, t_0] and the upper bound depends only on the initial condition. Furthermore, using the triangle inequality and the fact that a(t) is monotonically increasing with $a(0) = c^2$, the following identity holds:
$$\int_0^t \lambda(s)a(s)\,\|\nabla\Phi(x(s))\|\,ds \;=\; \int_0^t \sqrt{\lambda(s)a(s)}\,\Big(\sqrt{\lambda(s)a(s)}\,\|\nabla\Phi(x(s))\|\Big)\,ds,$$
and the right-hand side can be bounded by the Hölder inequality.

The existence of a maximal solution follows from a classical argument relying on the existence and uniqueness of a local solution (see Theorem 2.6). It remains to show that the maximal solution is a global solution, that is, T_max = +∞, if λ is absolutely continuous on any bounded interval. Indeed, this property guarantees that λ(·) is bounded on the interval [0, T_max). By Lemma 3.5 and the equivalence between the closed-loop control system in Eq. (1.3) and Eq. (1.4) and the first-order system in Eq. (2.3), the solution trajectory x(·) is bounded on the interval [0, T_max) and the upper bound depends only on the initial condition. This implies that ẋ(·) is also bounded on [0, T_max), by considering the system in the autonomous form of Eq. (2.7) and Eq. (2.8). Putting these pieces together yields that x(·) is Lipschitz continuous on [0, T_max) and there exists $\bar{x} = \lim_{t \to T_{\max}} x(t)$.

Putting these pieces together yields the claimed estimate.
If T_max < +∞, the absolute continuity of λ on any bounded interval implies that λ(·) is bounded on [0, T_max]. This together with the algebraic equation implies that $\bar{x} \in \Omega$. However, by Theorem 2.6 with initial datum $\bar{x}$, we can extend the solution to a strictly larger interval, which contradicts the maximality of the aforementioned solution. This completes the proof.

Rate of convergence
We establish a convergence rate for a global solution of the closed-loop control system in Eq. (1.3) and Eq. (1.4).

Theorem 3.6 Suppose that (x(·), v(·)) is a global solution of the closed-loop control system in Eq. (1.3) and Eq. (1.4). Then, the objective function gap satisfies $\Phi(x(t)) - \Phi(x^\star) = O(t^{-(3p+1)/2})$ and the squared gradient norm satisfies $\inf_{0 \le s \le t}\|\nabla\Phi(x(s))\|^2 = O(t^{-3p})$.

Remark 3.7 This theorem shows that the convergence rate is $O(t^{-(3p+1)/2})$ in terms of objective function gap and $O(t^{-3p})$ in terms of squared gradient norm. Note that the former result does not imply the latter, but only gives a rate of $O(t^{-(3p+1)/2})$ for squared-gradient-norm minimization. In fact, squared-gradient-norm minimization is generally of independent interest [Nesterov, 2012, Shi et al., 2018, Grapiglia and Nesterov, 2020a] and its analysis involves different techniques.
The following lemma is a global version of Lemma 3.4 and the proof is exactly the same. Thus, we only state the result.
Lemma 3.8 Suppose that (x(·), v(·)) is a global solution of the first-order system in Eq. (2.3). Then, the descent inequality of Lemma 3.4 holds for all t ≥ 0.

In view of Lemma 3.8, the key ingredient for analyzing the convergence rate in terms of both the objective function gap and the squared gradient norm is a lower bound on a(t). We summarize this result in the following lemma (Lemma 3.9).

Proof. For p = 1, the feedback control law is given by λ(t) = θ for all t ∈ [0, +∞), and the desired bound follows directly. For p ≥ 2, the algebraic equation implies that $\|\nabla\Phi(x(t))\| = \big(\theta^{1/p}/\lambda(t)\big)^{\frac{p}{p-1}}$, since λ(t) > 0 for all t ∈ [0, +∞). This together with Lemma 3.8 and the Hölder inequality yields an integral inequality for a(·).
Combining these results with the definition of a yields an integral inequality. Since a(t) is nonnegative and nondecreasing with $a(0) = c^2$, the remaining steps of the proof are based on the Bihari-LaSalle inequality [LaSalle, 1949, Bihari, 1956].
In particular, we introduce an auxiliary function y(·) and derive a differential inequality for ẏ. Integrating this inequality over [0, t] and unwinding the definition of y(t), together with Eq. (3.2), yields the desired lower bound on a(t). This completes the proof.
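For the reader's convenience, the Bihari-LaSalle inequality invoked above can be stated in the following standard separable form (with generic constants; the proof applies it in the direction that yields a lower bound on a(t)):

```latex
% Bihari--LaSalle inequality (standard separable form): if $y \ge 0$ is
% absolutely continuous and, for some $C > 0$ and $\gamma \in (0, 1)$,
\[
  \dot{y}(t) \;\ge\; C\,(y(t))^{\gamma} \quad \text{for all } t \ge 0,
\]
% then integrating $\frac{d}{dt}\,(y(t))^{1-\gamma} \ge (1-\gamma)\,C$ over $[0, t]$ gives
\[
  y(t) \;\ge\; \Bigl( (1-\gamma)\,C\,t + (y(0))^{1-\gamma} \Bigr)^{\frac{1}{1-\gamma}} .
\]
```

The reverse direction (with ≤ throughout) holds by the same separation-of-variables argument.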

Discussion
It is useful to compare our approach to approaches based on time scaling [Attouch et al., 2019a,c, 2021a] and quasi-gradient methods [Bégout et al., 2015].
Regularity condition. Why is proving the existence and uniqueness of a global solution of the closed-loop control system in Eq. (1.3) and Eq. (1.4) hard without the regularity condition? Our system differs from the existing systems in three respects: (i) the appearance of both ẍ and ẋ; (ii) the algebraic equation that links λ and ∇Φ(x); and (iii) the evolution dynamics depend on λ via a and ȧ. From a technical point of view, the combination of these features makes it challenging to control a lower bound on the gradient norm ‖∇Φ(x(·))‖ or an upper bound on the feedback control λ(·) on a local interval. In sharp contrast, $\|\nabla\Phi(x(t))\| \ge \|\nabla\Phi(x(0))\|e^{-t}$ or $\lambda(t) \le \lambda(0)e^t$ can readily be derived for the Levenberg-Marquardt regularized system in Attouch and Svaiter [2011, Corollary 3.3] and even for the closed-loop control systems without inertia in Attouch et al. [2013, Theorem 5.2] and Attouch et al. [2016a, Theorem 2.4]. Thus, we cannot exclude the case of λ(t) → +∞ on a bounded interval without the regularity condition, and we accordingly fail to establish global existence and uniqueness. We consider it an interesting open problem to derive the regularity condition rather than imposing it as an assumption.
Infinite-dimensional setting. It is promising to study our system using the techniques developed by Attouch et al. [2016b] for an infinite-dimensional setting. Our convergence analysis can in fact be extended directly, yielding the same rates of $O(1/t^{(3p+1)/2})$ in terms of objective function gap and $O(1/t^{3p})$ in terms of squared gradient norm in the Hilbert-space setting. However, the weak convergence of the solution trajectories is another matter. Note that Attouch et al. [2016b] studied an open-loop system with parameters (α, β). The condition α > 3 is crucial for proving weak convergence of solution trajectories and establishing strong convergence in various practical situations. Indeed, the convergence of the solution trajectory has not been established so far when α = 3 (except in the one-dimensional case with β = 0; see Attouch et al. [2019b]). Unfortunately, when c = 0 and p = 1, the closed-loop control system in Eq. (1.3) and Eq. (1.4) reduces to an inertial system whose asymptotic damping coefficient is 3/t. This coefficient does not satisfy the aforementioned condition in Attouch et al. [2016b], leaving doubt as to whether weak convergence holds for the closed-loop control system in Eq. (1.3) and Eq. (1.4).
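For reference, the open-loop system with parameters (α, β) studied by Attouch et al. [2016b] is the inertial gradient system with Hessian-driven damping:

```latex
\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \beta\,\nabla^2\Phi(x(t))\,\dot{x}(t) + \nabla\Phi(x(t)) = 0 .
\]
```

Setting β = 0 recovers the asymptotically vanishing damping system whose threshold α > 3 is discussed above.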
Time scaling. In the context of non-autonomous dissipative systems, time scaling is a simple yet universally powerful tool to accelerate the convergence of solution trajectories [Attouch et al., 2019a,c, 2021a]. Consider the general inertial gradient system in Eq. (1.3):
$$\ddot{x}(t) + \alpha(t)\dot{x}(t) + \beta(t)\nabla^2\Phi(x(t))\dot{x}(t) + b(t)\nabla\Phi(x(t)) = 0,$$
where the effect of time scaling is characterized by the coefficient b(t), which enters as a factor of ∇Φ(x(t)). In Attouch et al. [2019a,c], the authors conducted an in-depth study of the convergence of the above system without Hessian-driven damping (β = 0). For the case α(t) = α/t, the convergence rate turns out to be $O(\frac{1}{t^2 b(t)})$ under certain conditions on the scalar α and b(·). Thus, a clear improvement can be achieved by taking b(t) → +∞. This demonstrates the power and potential of time scaling, as further evidenced by recent work on systems with Hessian damping [Attouch et al., 2021a] and on systems associated with the augmented Lagrangian formulation of affine-constrained convex minimization [Attouch et al., 2021b].
Compared to our closed-loop damping approach, the time scaling technique is based on an open-loop control regime: b(t) is chosen by hand. In contrast, λ(t) in our system is determined by the gradient ∇Φ(x(t)) via the algebraic equation, and the evolution dynamics depend on λ via a and ȧ. The time scaling methodology accordingly does not capture the continuous-time interpretation of optimal acceleration in high-order optimization [Monteiro and Svaiter, 2013, Gasnikov et al., 2019, Bubeck et al., 2019]. In contrast, our algebraic equation provides a rigorous justification for the large-step condition in existing algorithms [Monteiro and Svaiter, 2013, Gasnikov et al., 2019, Bubeck et al., 2019] when p ≥ 2 and demonstrates the fundamental role that feedback control plays in optimal acceleration, a role clarified by the continuous-time perspective.
Quasi-gradient approach and Kurdyka-Lojasiewicz (KL) theory. The quasi-gradient approach to inertial gradient systems was developed in Bégout et al. [2015] and recently applied by Attouch et al. [2020a] to analyze inertial dynamics with closed-loop control of the velocity. Recall that a vector field F is called a quasi-gradient for a function E if it has the same singular points as E and if the angle between the field F and the gradient ∇E remains acute and bounded away from π/2 (see Huang [2006], Chergui [2008], Chill and Fašangová [2010], Bárta et al. [2012], Bárta and Fašangová [2016] for further geometrical interpretation).
They proposed to use the Hamiltonian formulation of these systems and accordingly defined a function $E_\eta$ of $(x, v) = (x(t), \dot{x}(t))$. If φ satisfies certain growth conditions (see Attouch et al. [2020a, Theorems 7.3 and 9.2]), the systems in Eq. (3.3) and Eq. (3.4) both have a quasi-gradient structure for $E_\eta$ for sufficiently small η > 0. This provides an elegant framework for analyzing the convergence properties of systems of the form of Eq. (3.3) and Eq. (3.4) with specific damping potentials. Why is analyzing our system hard using the quasi-gradient approach? Our system differs from the systems in Eq. (3.3) and Eq. (3.4) in two respects: (i) the closed-loop control law is designed for the gradient of Φ rather than the velocity ẋ; (ii) the damping coefficients are time-dependent, depending on λ via a and ȧ, and do not have an analytic form when p ≥ 2. Considering the first-order systems in Eq. (2.3) and Eq. (2.4), we find that F is a time-dependent vector field which cannot be handled by the current quasi-gradient approach. We consider it an interesting open problem to develop a quasi-gradient approach for analyzing our system.

Implicit Time Discretization and Optimal Acceleration
In this section, we propose two conceptual algorithmic frameworks that arise via implicit time discretization of the closed-loop system in Eq. (2.3) and Eq. (2.4). Our approach demonstrates the importance of the large-step condition [Monteiro and Svaiter, 2013] for optimal acceleration, interpreting it as the discretization of the algebraic equation. This allows us to further clarify why this condition is unnecessary for first-order optimization algorithms in the case of p = 1 (the algebraic equation disappears). With an approximate tensor subroutine [Nesterov, 2019], we derive two classes of p-th order tensor algorithms, one of which recovers existing optimal p-th order tensor algorithms [Gasnikov et al., 2019, Bubeck et al., 2019] and the other of which leads to a new optimal p-th order tensor algorithm.

Conceptual algorithmic frameworks
We study two conceptual algorithmic frameworks which are derived by implicit time discretization of Eq. (2.3) with c = 0 and Eq. (2.4) with c = 2.
First algorithmic framework. By the definition of a(t), we have $(\dot{a}(t))^2 = \lambda(t)a(t)$ and a(0) = 0. This yields an equivalent formulation of the first-order system in Eq. (2.3) with c = 0, recorded in Eq. (4.1).
We propose to solve these two equations inexactly, replacing ∇Φ(x_{k+1}) by a sufficiently accurate approximation in the first line of Eq. (4.1). In particular, the first equation can be equivalently written in the form $\lambda_{k+1}w_{k+1} + x_{k+1} - \tilde{v}_k = 0$, where $w_{k+1} = \nabla\Phi(x_{k+1})$. This motivates us to introduce a relative error tolerance [Solodov and Svaiter, 1999, Monteiro and Svaiter, 2010]. In particular, we define the ε-subdifferential of a function f and find $\lambda_{k+1} > 0$ and a triple $(x_{k+1}, w_{k+1}, \varepsilon_{k+1})$ satisfying the relative-error criterion. With this criterion, $w_{k+1}$ is a sufficiently accurate approximation of ∇Φ(x_{k+1}). Moreover, the second equation can be relaxed to the large-step condition $\lambda_{k+1}\|x_{k+1} - \tilde{v}_k\|^{p-1} \ge \theta$.
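The two acceptance tests can be sketched numerically. The sketch below assumes the Monteiro-Svaiter-style form of the relative-error criterion, $\|\lambda w + x - \tilde{v}\|^2 + 2\lambda\varepsilon \le \sigma^2\|x - \tilde{v}\|^2$, together with the large-step condition above; the quadratic test function and the helper name are our own illustration, not the paper's algorithm:

```python
import numpy as np

def accept_step(lam, x, v_tilde, w, eps, sigma, theta, p):
    """Check a Monteiro-Svaiter-style relative-error criterion
        ||lam*w + x - v_tilde||^2 + 2*lam*eps <= sigma^2 * ||x - v_tilde||^2
    together with the large-step condition
        lam * ||x - v_tilde||^(p-1) >= theta."""
    r = np.linalg.norm(lam * w + x - v_tilde)
    d = np.linalg.norm(x - v_tilde)
    relative_error_ok = r ** 2 + 2 * lam * eps <= sigma ** 2 * d ** 2
    large_step_ok = lam * d ** (p - 1) >= theta
    return relative_error_ok and large_step_ok

# Toy check with Phi(x) = 0.5*||x||^2: the exact proximal step
# x = v_tilde / (1 + lam) gives w = grad Phi(x) = x and eps = 0, so that
# lam*w + x - v_tilde = 0 and the relative-error criterion holds trivially.
v_tilde = np.array([2.0, -1.0])
lam, sigma, theta, p = 1.0, 0.9, 0.4, 2
x = v_tilde / (1 + lam)
w = x  # gradient of 0.5*||x||^2 at x
assert accept_step(lam, x, v_tilde, w, 0.0, sigma, theta, p)
```

The point of the relaxation is that an inexact p-th order subroutine only needs to produce a triple passing both tests, rather than an exact root of the coupled equations.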
Remark 4.1 We present our first conceptual algorithmic framework formally in Algorithm 1. This scheme includes the large-step A-HPE framework [Monteiro and Svaiter, 2013] as a special instance. Indeed, it reduces to the large-step A-HPE framework if we set $y = \tilde{y}$ and p = 2 and change the notation of $(x, v, \tilde{v}, w)$ to $(y, x, \tilde{x}, v)$ from Monteiro and Svaiter [2013].
Remark 4.2 We present our second conceptual algorithmic framework formally in Algorithm 2. To the best of our knowledge, this scheme does not appear in the literature and is based on an estimate sequence which differs from the one used in Algorithm 1. However, from a continuous-time perspective, these two algorithms are equivalent up to a constant c > 0, demonstrating that they achieve the same convergence rate in terms of both objective function gap and squared gradient norm.
Comparison with Güler's accelerated proximal point algorithm. Algorithm 2 is related to Güler's accelerated proximal point algorithm (APPA) [Güler, 1992], which combines Nesterov acceleration [Nesterov, 1983] with Martinet's PPA [Martinet, 1970, 1972]. Indeed, the analogs of the update formulas $\tilde{v}_k = (1 - \alpha_{k+1})x_k + \alpha_{k+1}v_k$ and $(\alpha_{k+1})^2 = \lambda_{k+1}(1 - \alpha_{k+1})\gamma_k$ appear in Güler's algorithm, suggesting similar evolution dynamics. However, Güler's APPA does not specify how to choose $\{\lambda_k\}_{k \ge 0}$ but regards them as free parameters, while our algorithm links their choice to the gradient norm of Φ via the large-step condition. This difference is emphasized by recent studies on the continuous-time perspective of Güler's APPA [Attouch et al., 2019a,c]. More specifically, Attouch et al. [2019a] proved that Güler's APPA can be interpreted as the implicit time discretization of an open-loop inertial gradient system (see Attouch et al. [2019a, Eq. (53)]), where $g_k$ and $\beta_k$ in their notation correspond to $\alpha_k$ and $\lambda_k$ in Algorithm 2. By using $\gamma_{k+1} - \gamma_k = -\alpha_{k+1}\gamma_k$ and standard continuous-time arguments, we have $g(t) = -\dot{\gamma}(t)/\gamma(t)$ and $\beta(t) = \lambda(t) = (\dot{\gamma}(t))^2/(\gamma(t))^3$. By further defining $a(t) = 1/\gamma(t)$, the above system takes a form in which a depends explicitly on the variable λ. Compared to our closed-loop control system, the one in Eq. (4.4) is open-loop, without the algebraic equation, and does not contain Hessian-driven damping. The coefficient of the gradient term is also different, corresponding to a different time rescaling of the evolution dynamics [Attouch et al., 2021a].
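The coupling $(\alpha_{k+1})^2 = \lambda_{k+1}(1 - \alpha_{k+1})\gamma_k$ determines $\alpha_{k+1} \in (0, 1)$ as the positive root of a quadratic. A small sketch (the solver is our own; only the recursion itself comes from the text above):

```python
import math

def alpha_next(lam: float, gamma: float) -> float:
    """Positive root of alpha^2 = lam * (1 - alpha) * gamma, i.e. of
    alpha^2 + lam*gamma*alpha - lam*gamma = 0; it always lies in (0, 1)
    because the quadratic is negative at 0 and equals 1 at alpha = 1."""
    c = lam * gamma
    return (-c + math.sqrt(c * c + 4.0 * c)) / 2.0

lam, gamma = 0.7, 1.0
a = alpha_next(lam, gamma)
assert 0.0 < a < 1.0
assert abs(a * a - lam * (1.0 - a) * gamma) < 1e-12
# The Guler-style interpolation update then reads
# v_tilde = (1 - a) * x + a * v.
```

This makes concrete the sense in which λ drives the momentum weight: larger proximal parameters λ push α toward 1.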

Complexity analysis
We study the iteration complexity of Algorithms 1 and 2. Our analysis is largely motivated by the aforementioned continuous-time analysis, simplifying the analysis in Monteiro and Svaiter [2013] for the case of p = 2 and generalizing it to the case of p > 2 in a systematic manner (see Theorem 4.3 and Theorem 4.6). Throughout this subsection, $x^\star$ denotes the projection of $v_0$ onto the solution set of Φ.
Algorithm 1. We start with the presentation of our main results for Algorithm 1, which in fact generalizes Monteiro and Svaiter [2013, Theorem 4.1] to the case of p > 2.
Theorem 4.3 For every integer k ≥ 1, the objective function gap satisfies $\Phi(x_k) - \Phi(x^\star) = O(k^{-(3p+1)/2})$ and the squared gradient norm satisfies $\inf_{1 \le i \le k}\|w_i\|^2 = O(k^{-3p})$.

Note that the only difference between Algorithm 1 and the large-step A-HPE framework in Monteiro and Svaiter [2013] is the order in the algebraic equation. As such, many of the technical results derived in Monteiro and Svaiter [2013] also hold for Algorithm 1; more specifically, Monteiro and Svaiter [2013, Theorem 3.6, Lemma 3.7 and Proposition 3.9]. We also present a technical lemma that provides a lower bound for $A_k$.
Lemma 4.4 For p ≥ 1 and every integer k ≥ 1, $A_k$ admits a lower bound of order $k^{(3p+1)/2}$.

Proof. For p = 1, the large-step condition implies that $\lambda_k \ge \theta$ for all k ≥ 0. By Monteiro and Svaiter [2013, Lemma 3.7], we have $A_k \ge \frac{\theta k^2}{4}$. For p ≥ 2, the large-step condition, Monteiro and Svaiter [2013, Theorem 3.6 and Lemma 3.7] and the Hölder inequality together yield a recursive estimate; for ease of presentation, we absorb the resulting quantities into a constant C. The remaining proof is based on the Bihari-LaSalle inequality in discrete time. In particular, we define a nonnegative nondecreasing sequence $\{y_k\}_{k \ge 0}$ with $y_0 = 0$, for which Eq. (4.5) implies a discrete differential inequality.
Inspired by the continuous-time argument in Lemma 3.9, we claim that the discrete-time inequality in Eq. (4.7) holds for every integer k ≥ 1. Indeed, we define $g(t) = 1 - t^{2/(p+1)}$ and note that this function is convex on (0, 1) since p ≥ 1. Since $y_k$ is increasing, we have $y_{k-1}/y_k \in (0, 1)$, and the desired Eq. (4.7) follows from setting $t = y_{k-1}/y_k$ in the gradient inequality for g. Combining Eq. (4.6) and Eq. (4.7) shows that $\{y_k\}$ grows at the claimed rate. By the definition of $y_k$, this together with Eq. (4.5) yields the desired bound. This completes the proof.
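The convexity step can be checked numerically: for $g(t) = 1 - t^{2/(p+1)}$ with p ≥ 1, the tangent-line inequality at t = 1 gives $1 - t^{2/(p+1)} \ge \frac{2}{p+1}(1 - t)$ on (0, 1), since $g(1) = 0$ and $g'(1) = -\frac{2}{p+1}$. A quick sketch (our own verification):

```python
def g(t: float, p: int) -> float:
    """g(t) = 1 - t^(2/(p+1)), convex on (0, 1) for p >= 1
    because the exponent 2/(p+1) lies in (0, 1]."""
    return 1.0 - t ** (2.0 / (p + 1))

# Tangent-line bound at t = 1: g(t) >= g(1) + g'(1)*(t - 1) = (2/(p+1))*(1 - t).
for p in [1, 2, 3, 5]:
    for i in range(1, 100):
        t = i / 100.0
        assert g(t, p) >= (2.0 / (p + 1)) * (1.0 - t) - 1e-12
```

For p = 1 the exponent equals 1 and the bound holds with equality, which is consistent with the absence of a large-step condition in the first-order case.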
Remark 4.5 The proof of Lemma 4.4 is much simpler than the existing analysis; see, e.g., Monteiro and Svaiter [2013, Lemma 4.2].

Proof of Theorem 4.3: Combining Monteiro and Svaiter [2013, Proposition 3.9] and Lemma 4.4 yields the bound on the objective function gap. In addition, the same combination yields the bound on the squared gradient norm. This completes the proof.
Algorithm 2. We now present our main results for Algorithm 2. The proof is analogous to that of Theorem 4.3 and based on another estimate sequence.
Theorem 4.6 For every integer k ≥ 1, the objective function gap satisfies $\Phi(x_k) - \Phi(x^\star) = O(k^{-(3p+1)/2})$.

Inspired by the continuous-time Lyapunov function in Eq. (3.1), we construct a discrete-time Lyapunov function for Algorithm 2 in Eq. (4.8). We use this function to prove technical results that pertain to Algorithm 2 and which are the analogs of Monteiro and Svaiter [2013, Theorem 3.6, Lemma 3.7 and Proposition 3.9].
Lemma 4.7 For every integer k ≥ 1, the discrete-time Lyapunov function in Eq. (4.8) is nonincreasing, together with associated summability estimates.

Proof. It suffices to prove the first inequality, which implies the other results. Based on the discrete-time Lyapunov function, we define two functions $\varphi_k: \mathbb{R}^d \to \mathbb{R}$ and $\Gamma_k: \mathbb{R}^d \to \mathbb{R}$, where $\Gamma_k$ is related to $E_k$ and defined recursively. We first prove a lower bound on $\Gamma_k$ by induction: it holds when k = 0 since $\gamma_0 = 1$, and assuming that it holds for all i ≤ k, we derive it for k + 1. Next, we prove that $v_k = \operatorname{argmin}_{v \in \mathbb{R}^d} \Gamma_k(v)$, again by induction: it holds when k = 0, and supposing that it holds for all i ≤ k, the claim for k + 1 follows from the definition of $v_k$. The remaining proof is based on the gap sequence $\{\beta_k\}_{k \ge 0}$. Using the facts that $\Gamma_k$ is quadratic with $\nabla^2\Gamma_k = I$ and the upper bound for $\Gamma_k(v)$, and noting that $\beta_0 = 0$ by definition, it suffices to prove that the recursive inequality in Eq. (4.9) holds for every integer k ≥ 0. In particular, we define $\tilde{v} = (1 - \alpha_{k+1})x_k + \alpha_{k+1}v$ for any given $v \in \mathbb{R}^d$. Using the definition of $\tilde{v}_k$ and the affinity of $\varphi_{k+1}$, we plug this into the recursive equation for $\Gamma_k$. By the definition of $\beta_k$, we have $\Gamma_k(v_k) = \beta_k + \frac{1}{\gamma_k}(\Phi(x_k) - \Phi(x^\star))$. Putting these pieces together with the definition of $E_k$ yields Eq. (4.10). Using Monteiro and Svaiter [2013, Lemma 3.3] with $\lambda = \lambda_{k+1}$, $\tilde{v} = \tilde{v}_k$, $x = x_{k+1}$, $w = w_{k+1}$ and $\varepsilon = \varepsilon_{k+1}$, and combining the resulting bounds with the definition of $\beta_k$, we obtain the desired inequality in Eq. (4.9). This completes the proof.
Lemma 4.9 For every integer k ≥ 1 and σ < 1, there exists 1 ≤ i ≤ k such that the corresponding residual bound holds.

Proof. With the convention 0/0 = 0, we define $\tau_k = \max\big\{\frac{2\varepsilon_k}{\sigma^2}, \frac{\lambda_k\|w_k\|^2}{(1+\sigma)^2}\big\}$ for every integer k ≥ 1. Then, the relative-error criterion implies that $\lambda_k\tau_k \le \|x_k - \tilde{v}_{k-1}\|^2$ for every integer k ≥ 1. This together with Lemma 4.7 yields a bound on the sum of the $\lambda_k\tau_k$. Combining this inequality with the definition of $\tau_k$ yields the desired results.
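The bound $\lambda_k\tau_k \le \|x_k - \tilde{v}_{k-1}\|^2$ can be sanity-checked numerically: under the relative-error criterion, $2\lambda_k\varepsilon_k \le \sigma^2\|x_k - \tilde{v}_{k-1}\|^2$ controls the first term of the max, while the triangle inequality gives $\lambda_k\|w_k\| \le (1 + \sigma)\|x_k - \tilde{v}_{k-1}\|$, controlling the second. The sketch below (our own construction of admissible data, assuming the Monteiro-Svaiter form of the criterion) verifies the bound on random instances:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.8
for _ in range(100):
    lam = float(rng.uniform(0.1, 2.0))
    v_tilde = rng.normal(size=3)
    x = rng.normal(size=3)
    d = np.linalg.norm(x - v_tilde)
    # Build (w, eps) satisfying the relative-error criterion
    #   ||lam*w + x - v_tilde||^2 + 2*lam*eps <= sigma^2 * d^2
    # by choosing a residual r of norm sigma*d/2 and splitting the slack.
    r = rng.normal(size=3)
    r *= 0.5 * sigma * d / np.linalg.norm(r)
    w = (r - (x - v_tilde)) / lam
    eps = (sigma ** 2 * d ** 2 - np.linalg.norm(r) ** 2) / (4 * lam)
    tau = max(2 * eps / sigma ** 2,
              lam * np.linalg.norm(w) ** 2 / (1 + sigma) ** 2)
    assert lam * tau <= d ** 2 + 1e-9
```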
As the analog of Lemma 4.4, we provide a technical lemma giving an upper bound for $\gamma_k$. The analysis is based on the same idea used to prove Lemma 4.4 and is motivated by the continuous-time analysis of the first-order system in Eq. (2.4).
Lemma 4.10 For p ≥ 1 and every integer k ≥ 1, $\gamma_k$ admits an upper bound of order $k^{-(3p+1)/2}$.

Proof. For p = 1, the large-step condition implies that $\lambda_k \ge \theta$ for all k ≥ 0. By Lemma 4.8, we have $\gamma_k \le \frac{4}{\theta k^2}$. For p ≥ 2, the large-step condition and the Hölder inequality yield a recursive estimate; for ease of presentation, we define $C = \theta^{-\frac{2}{3p+1}}\big(\frac{2E_0}{1-\sigma^2}\big)^{\frac{p-1}{3p+1}}$. Putting these pieces together yields Eq. (4.12). Using the same argument as in the proof of Lemma 4.4, this together with Eq. (4.12) yields the desired bound. This completes the proof.
Proof of Theorem 4.6: For every integer k ≥ 1, by Lemma 4.7 and Lemma 4.10, we obtain the bound on the objective function gap. By Lemma 4.9 and Lemma 4.10, we obtain the residual bound. As in the proof of Theorem 4.3, we conclude that $\inf_{1 \le i \le k}\|w_i\|^2 = O(k^{-3p})$. This completes the proof.
Remark 4.11 The discrete-time analysis in this subsection is based on the discrete-time Lyapunov function in Eq. (4.8), which is closely related to the continuous-time one in Eq. (3.1), and two simple yet nontrivial technical lemmas (Lemmas 4.4 and 4.10), which are both discrete-time versions of Lemma 3.9. Notably, the proofs of Lemmas 4.4 and 4.10 follow the same path as the proof of Lemma 3.9 and demand the use of the Bihari-LaSalle inequality in discrete time.

Optimal tensor algorithms and gradient norm minimization
By instantiating Algorithms 1 and 2 with approximate tensor subroutines, we develop two families of optimal p-th order tensor algorithms for minimizing a function $\Phi \in \mathcal{F}^p_\ell(\mathbb{R}^d)$. The former family includes all existing optimal p-th order tensor algorithms [Gasnikov et al., 2019, Bubeck et al., 2019], while the latter is new to our knowledge. We also provide a hitherto unknown result: the optimal p-th order tensor algorithms in this section minimize the squared gradient norm at a rate of $O(k^{-3p})$. These results extend those for the optimal first-order and second-order algorithms obtained in Shi et al. [2018] and Monteiro and Svaiter [2013].
First algorithm. We present the first optimal p-th order tensor algorithm in Algorithm 3 and prove that it is Algorithm 1 with a specific choice of θ.
In view of Proposition 4.12, the iteration complexity results derived for Algorithm 1 hold for Algorithm 3. We summarize the results in the following theorem.
Theorem 4.13 For every integer k ≥ 1, the objective function gap satisfies $\Phi(x_k) - \Phi(x^\star) = O(k^{-(3p+1)/2})$ and the squared gradient norm satisfies $\inf_{1 \le i \le k}\|\nabla\Phi(x_i)\|^2 = O(k^{-3p})$.

Remark 4.14 Theorem 4.13 has been derived in Monteiro and Svaiter [2013, Theorem 6.4] for the special case of p = 2, and a similar result for Nesterov's accelerated gradient descent (the special case of p = 1) has been derived in Shi et al. [2018]. For p ≥ 3 in general, the first inequality on the objective function gap has been derived independently in Gasnikov et al. [2019] and Bubeck et al. [2019].

Second algorithm. We present the second optimal p-th order tensor algorithm in Algorithm 4, which is Algorithm 2 with a specific choice of θ. The proof is omitted since it is the same as the aforementioned analysis for Algorithm 3.
Remark 4.17 The approximate tensor subroutine in Algorithms 3 and 4 can be efficiently implemented using a novel bisection search scheme. We refer interested readers to Gasnikov et al. [2019] and Bubeck et al. [2019] for the details.

Conclusions
We have presented a closed-loop control system for modeling optimal tensor algorithms for smooth convex optimization and provided continuous-time and discrete-time Lyapunov functions for analyzing the convergence properties of this system and its discretization. Our framework provides a systematic way to derive discrete-time p-th order optimal tensor algorithms, for p ≥ 2, and to simplify existing analyses via the use of a Lyapunov function. A key ingredient in our framework is the algebraic equation, which is not present in the setting of p = 1 but is essential for deriving optimal acceleration methods for p ≥ 2. Our framework allows us to infer that a certain class of p-th order tensor algorithms minimizes the squared norm of the gradient at a fast rate of $O(k^{-3p})$ for smooth convex functions.
It is worth noting that one could also consider closed-loop feedback control of the velocity. This is called nonlinear damping in the PDE literature; see Attouch et al. [2020a] for recent progress in this direction. There are also several other avenues for future research. In particular, it is of interest to bring our perspective into register with the Lagrangian and Hamiltonian frameworks that have proved productive in recent work [Wibisono et al., 2016, Diakonikolas and Jordan, 2020, Muehlebach and Jordan, 2021, França et al., 2021] and with the control-theoretic viewpoint of Lessard et al. [2016] and Hu and Lessard [2017]. We would hope for this study to provide additional insight into the geometric or dynamical role played by the algebraic equation in modeling the continuous-time dynamics. Moreover, we wish to study possible extensions of our framework to nonsmooth optimization by using differential inclusions [Vassilis et al., 2018] and monotone inclusions. The idea is to consider the setting in which 0 ∈ T(x), where T is a maximally monotone operator in a Hilbert space [Alvarez and Attouch, 2001, Attouch and Svaiter, 2011, Maingé, 2013, Attouch et al., 2013, Abbas et al., 2014, Attouch et al., 2016a, Bot and Csetnek, 2016, Attouch and Cabot, 2018, Attouch and László, 2020b]. Finally, given that direct discretization of our closed-loop control system cannot recover Nesterov's optimal high-order tensor algorithms [Nesterov, 2018, Section 4.3], it is of interest to investigate the continuous-time limit of Nesterov's algorithms and to ask whether the algebraic equation plays a role in their analysis.