A software framework for embedded nonlinear model predictive control using a gradient-based augmented Lagrangian approach (GRAMPC)

A nonlinear MPC framework is presented that is suitable for dynamical systems with sampling times in the (sub)millisecond range and that allows for an efficient implementation on embedded hardware. The algorithm is based on an augmented Lagrangian formulation with a tailored gradient method for the inner minimization problem. The algorithm is implemented in the software framework GRAMPC and is a fundamental revision of an earlier version. Detailed performance results are presented for a test set of benchmark problems and in comparison to other nonlinear MPC packages. In addition, runtime results and memory requirements for GRAMPC on ECU level demonstrate its applicability on embedded hardware.


Introduction
Model predictive control (MPC) is one of the most popular advanced control methods due to its ability to handle linear and nonlinear systems with constraints and multiple inputs. The well-known challenge of MPC is the numerical effort that is required to solve the underlying optimal control problem (OCP) online. In the recent past, however, the methodological as well as algorithmic development of MPC for linear and nonlinear systems has matured to a point that MPC nowadays can be applied to highly dynamical problems in real time. Most approaches to real-time MPC either rely on suboptimal solution strategies [56,14,23] and/or use tailored optimization algorithms to maximize computational efficiency.
Particularly for linear systems, the MPC problem can be reduced to a quadratic program, for which the optimal control over the admissible polyhedral set can be precomputed. This results in an explicit MPC strategy with minimal computational effort for the online implementation [4,5], though this approach is typically limited to a small number of state and control variables. An alternative to explicit MPC is the online active set method [17] that takes advantage of the receding horizon property of MPC in the sense that typically only a small number of constraints becomes active or inactive from one MPC step to the next.
In contrast to active set strategies, interior point methods relax the complementarity conditions for the constraints and therefore solve a relaxed set of optimality conditions in the interior of the admissible constraint set. The MPC software packages FORCES (PRO) [15] and fast mpc [60] employ interior point methods for linear MPC problems. An alternative to active set and interior point methods are accelerated gradient methods [52] that originally go back to Nesterov's fast gradient method for convex problems [48]. A corresponding software package for linear MPC is FiOrdOs [32].
For nonlinear systems, one of the first real-time MPC algorithms was the continuation/GMRES method [50] that solves the optimality conditions of the underlying optimal control problem based on a continuation strategy. The well-known ACADO Toolkit [30] uses the above-mentioned active set strategy in combination with a real-time iteration scheme to efficiently solve nonlinear MPC problems. Another recently presented MPC toolkit is VIATOC [33], which employs a projected gradient method to solve the time-discretized, linearized MPC problem.
Besides the real-time aspect of MPC, a current focus of research is on embedded MPC, i.e. the neat integration of MPC on embedded hardware with limited resources. Target platforms include field programmable gate arrays (FPGA) [44,38,29], programmable logic controllers (PLC) in standard automation systems [42,37], and electronic control units (ECU) in automotive applications [46]. For embedded MPC, several challenges arise in addition to the real-time demand. For instance, numerical robustness even at low computational accuracy and tolerance against infeasibility are important aspects, as well as the ability to provide fast, possibly suboptimal iterates with minimal computational demand. In this regard, it is also desirable to satisfy the system dynamics in each single MPC iteration in order to maintain dynamically consistent iterates. Low code complexity for portability and a small memory footprint are further important aspects to allow for an efficient implementation of MPC on embedded hardware. The latter aspects, in particular, require the usage of streamlined, self-contained code rather than highly complex MPC algorithms. To meet these challenges, gradient-based algorithms are popular choices due to their general simplicity and low computational complexity [20,41,47].
Following this motivation, this paper presents a software framework for nonlinear MPC that can be efficiently used for embedded control of nonlinear and highly dynamical systems with sampling times in the (sub)millisecond range. The presented framework is a fundamental revision of the MPC toolbox GRAMPC [36] (Gradient-Based MPC - [graemp si:]) that was originally developed for nonlinear systems with pure input constraints. The revised algorithm of GRAMPC presented in this paper allows one to account for general nonlinear equality and inequality constraints as well as terminal constraints. Besides "classical" MPC, the toolbox can be applied to MPC on shrinking horizon, general optimal control problems, moving horizon estimation, and parameter optimization problems including free end time problems. The new algorithm is based on an augmented Lagrangian formulation in connection with a real-time gradient method and tailored line search and multiplier update strategies that are optimized for a time and memory efficient implementation on embedded hardware. The performance and effectiveness of augmented Lagrangian methods for embedded nonlinear MPC was recently demonstrated for various application examples on rapid prototyping and ECU hardware level [28,46,16]. Besides the presentation of the augmented Lagrangian algorithm and the general usage of GRAMPC, the paper compares its performance to the nonlinear MPC toolkits ACADO and VIATOC for different benchmark problems. Moreover, runtime results are presented for GRAMPC on dSPACE and ECU level, including its memory footprint, to demonstrate its applicability on embedded hardware.
The paper is organized as follows. Section 2 presents the general problem formulation and exemplarily illustrates its application to model predictive control and moving horizon estimation. Section 3 describes the augmented Lagrangian framework in combination with a gradient method for the inner minimization problem. Section 4 gives an overview of the structure and usage of GRAMPC. Section 5 evaluates the performance of GRAMPC for different benchmark problems and in comparison to ACADO and VIATOC, before Section 6 closes the paper. Some norms are used inside the paper, in particular in Section 3. The Euclidean norm of a vector x ∈ R^n is denoted by ‖x‖_2, the weighted quadratic norm by ‖x‖_Q = (x^T Q x)^{1/2} for some positive definite matrix Q, and the scalar product of two vectors x, y ∈ R^n is defined as ⟨x, y⟩ = x^T y. For a time function x(t), t ∈ [0, T] with T < ∞, the vector-valued L2-norm is defined by ‖x‖_{L2} = (∫_0^T ‖x(t)‖_2^2 dt)^{1/2}. The inner product is denoted by ⟨x, y⟩ = ∫_0^T x^T(t) y(t) dt using the same (overloaded) notation as in the vector case. Moreover, function arguments (such as time t) might be omitted in the text for the sake of readability.

Problem formulation
This section describes the class of optimal control problems that can be solved by GRAMPC.The framework is especially suitable for model predictive control and moving horizon estimation, as the numerical solution method is tailored to embedded applications.Nevertheless, GRAMPC can be used to solve general optimal control problems or parameter optimization problems as well.

Optimal control problem
GRAMPC solves nonlinear constrained optimal control problems with fixed or free end time and potentially unknown parameters. Consequently, the most generic problem formulation that can be addressed by GRAMPC is given by

min_{u,p,T}  J(u, p, T; x_0) = V(x(T), p, T) + ∫_0^T l(x, u, p, t) dt    (1a)
s.t.  M ẋ(t) = f(x(t), u(t), p, t),  x(0) = x_0    (1b)
      g(x, u, p, t) = 0,   g_T(x(T), p, T) = 0    (1c)
      h(x, u, p, t) ≤ 0,   h_T(x(T), p, T) ≤ 0    (1d)
      u(t) ∈ [u_min, u_max]    (1e)
      p ∈ [p_min, p_max],  T ∈ [T_min, T_max].    (1f)

The cost to be minimized (1a) consists of the terminal cost V and the integral cost l. The dynamics (1b) are given in semi-implicit form with the (constant) mass matrix M, the nonlinear system function f : R^Nx × R^Nu × R^Np × R → R^Nx, and the initial state x_0. The system class (1b) includes standard ordinary differential equations for M = I as well as (index-1) differential-algebraic equations with singular mass matrix M. In addition, (1c) and (1d) account for equality and inequality constraints g and h as well as for the corresponding terminal constraints g_T and h_T. Finally, (1e) and (1f) represent box constraints for the optimization variables u = u(t), p, and T (if applicable).
In comparison to the previous version of the GRAMPC toolbox [36], the problem formulation (1) supports optimization with respect to parameters, a free end time, general state-dependent equality and inequality constraints, as well as terminal constraints. Furthermore, semi-implicit dynamics with a constant mass matrix M can be handled using the Rosenbrock solver RODAS [27] as numerical integrator. This extends the range of possible applications beyond MPC to general optimal control, parameter optimization, and moving horizon estimation. The primary target, however, remains embedded model predictive control of nonlinear systems, as the numerical solution algorithm is optimized for time and memory efficiency.

Application to model predictive control
Model predictive control relies on the iterative solution of an optimal control problem of the form (2) with the MPC-internal time coordinate τ ∈ [0, T] over the prediction horizon T. The initial state value x_k is the measured or estimated system state at the current sampling instant t_k = t_0 + kΔt, k ∈ N, with sampling time 0 < Δt ≤ T. The first part of the computed control trajectory u(τ), τ ∈ [0, Δt) is used as control input for the actual plant over the time interval t ∈ [t_k, t_{k+1}), before OCP (2) is solved again with the new initial state x_{k+1}. A popular choice of the cost functional (2a) is the quadratic form with the desired setpoint (x_des, u_des) and the positive (semi-)definite matrices P, Q, R. Stability is often ensured in MPC by imposing a terminal constraint x(T) ∈ Ω_β, where the set Ω_β = {x : V(x) ≤ β} for some β > 0 is defined in terms of the terminal cost V(x), which can be computed by solving a Lyapunov or Riccati equation that renders the set Ω_β invariant under a local feedback law [11,45]. In view of the OCP formulation (1), the terminal region as well as general box constraints on the state as given in (2c) can be expressed as inequality constraints of the form (1d). Note, however, that terminal constraints are often omitted in embedded or real-time MPC in order to minimize the computational effort [43,22,24]. In particular, real-time feasibility is typically achieved by limiting the number of iterations per sampling step and using the current solution for warm starting in the next MPC step, in order to incrementally reduce the suboptimality over the runtime of the MPC [14,22,21]. An alternative to the "classical" MPC formulation (2) is shrinking horizon MPC, see e.g. [13,57,25], where the horizon length T is shortened over the MPC steps. This can be achieved by formulating the underlying OCP (2) as a free end time problem with a terminal constraint g_T(x(T)) = x(T) − x_des = 0 to ensure that a desired setpoint x_des is reached in finite time instead of the asymptotic behavior of fixed-horizon MPC.
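The quadratic cost above can be sketched in a few lines of C. The following is a minimal illustration assuming diagonal weight matrices Q and R for brevity (the general form uses full positive (semi-)definite matrices, and the terminal term with P is evaluated analogously):

```c
#include <assert.h>
#include <stddef.h>

/* Quadratic stage cost l = ||x - xdes||_Q^2 + ||u - udes||_R^2,
 * sketched for diagonal weights q and r (an illustrative assumption;
 * the terminal cost ||x(T) - xdes||_P^2 is evaluated analogously). */
static double stage_cost(const double *x, const double *xdes, const double *q,
                         size_t nx, const double *u, const double *udes,
                         const double *r, size_t nu)
{
    double l = 0.0;
    for (size_t i = 0; i < nx; ++i) {
        double dx = x[i] - xdes[i];     /* state deviation from setpoint */
        l += q[i] * dx * dx;
    }
    for (size_t i = 0; i < nu; ++i) {
        double du = u[i] - udes[i];     /* control deviation */
        l += r[i] * du * du;
    }
    return l;
}
```

This stage cost is integrated over the horizon [0, T] when evaluating the cost functional.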

Application to moving horizon estimation
Moving horizon estimation (MHE) can be seen as the dual of MPC for state estimation problems. Similar to MPC, MHE relies on the online solution of a dynamic optimization problem of the form (5) that depends on the history of the control u(t) and measured output y(t) over the past time window [t_k − T, t_k]. The solution of (5) yields the estimate x̂_k of the current state x_k such that the estimated output function (5c) best matches the measured output y(t) over the past horizon T. Further constraints can be added to the formulation of (5) to incorporate a priori knowledge. GRAMPC can be used for moving horizon estimation by handling the system state at the beginning of the estimation horizon as optimization variables, i.e. p = x(t_k − T). In addition, a time transformation is required to map t ∈ [t_k − T, t_k] to the new time coordinate τ ∈ [0, T] along with the corresponding coordinate transformation (6) with the initial condition x̃(0) = 0. The optimization problem (5) then becomes (7). The solution p = x(t_k − T) of (7) and the coordinate transformation (6) are used to compute the current state estimate, where x̃(T) is the end point of the state trajectory returned by GRAMPC. In the next sampling step, the parameters can be re-initialized with the predicted estimate x(t_k − T + Δt) = p + x̃(Δt). Note that the above time transformation can alternatively be reversed in order to directly estimate the current state p = x(t_k). This, however, requires the reverse-time integration of the dynamics, which is numerically unstable if the system is stable in forward time. Vice versa, a reverse time transformation is to be preferred for MHE of an unstable process.
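In code, reconstructing the current estimate from the optimized initial state p and the transformed trajectory is a simple offset; the sketch below uses illustrative names rather than the GRAMPC API:

```c
#include <assert.h>
#include <stddef.h>

/* MHE estimate reconstruction, cf. the coordinate transformation (6):
 * the optimizer returns p = x(t_k - T) and a transformed trajectory
 * xt with xt(0) = 0; the current estimate is xhat_k = p + xt(T).
 * The same offset with xt(dt) warm-starts p in the next sampling step. */
static void mhe_reconstruct(const double *p, const double *xt,
                            double *out, size_t nx)
{
    for (size_t i = 0; i < nx; ++i)
        out[i] = p[i] + xt[i];
}
```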

Optimization algorithm
The optimization algorithm underlying GRAMPC uses an augmented Lagrangian formulation in combination with a real-time projected gradient method. Though SQP or interior point methods are typically superior in terms of convergence speed and accuracy, the augmented Lagrangian framework is able to rapidly provide a suboptimal solution at low computational cost, which is important in view of real-time applications and embedded optimization. In the following, the augmented Lagrangian formulation and the corresponding optimization algorithm are described for solving OCP (1). The algorithm follows a first-optimize-then-discretize approach in order to maintain the dynamical system structure in the optimality conditions before numerical integration is applied.

Augmented Lagrangian formulation
The basic idea of augmented Lagrangian methods is to replace the original optimization problem by its dual problem, see for example [8,49,9] as well as [19,31,6,1] for corresponding approaches in optimal control and function space settings.
The augmented Lagrangian formulation adjoins the constraints (1c), (1d) to the cost functional (1a) by means of the multipliers μ = (µ_g, µ_h, µ_gT, µ_hT) and additional quadratic penalty terms with the penalty parameters c = (c_g, c_h, c_gT, c_hT). A standard approach in augmented Lagrangian theory is to transform the inequalities (1d) into equality constraints by means of slack variables, which can be solved for analytically [8]. This leads to the overall set of equality constraints (9) (see Appendix A for details) with the transformed inequalities h̄ = max(h, −C_h^{-1} µ_h), cf. (10), and the diagonal matrix syntax C = diag(c). The vector-valued max-function is to be understood component-wise. The equalities (9a) are adjoined to the cost functional

J(u, p, T, μ, c; x_0) = V(x, p, T, µ_T, c_T) + ∫_0^T l(x, u, p, t, µ, c) dt    (11)

with the augmented terminal and integral cost terms and the stacked penalty and multiplier vectors. Note that the multipliers µ = [µ_g^T, µ_h^T]^T corresponding to the constraints (1d), respectively (10), are functions of time t. In the implementation of GRAMPC, the corresponding penalties c_g and c_h are handled time-dependently as well.
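The component-wise max-transformation of the inequalities can be sketched as follows; this is an illustration of the standard slack-variable elimination, with illustrative variable names:

```c
#include <assert.h>
#include <stddef.h>

/* Component-wise max-transformation of the inequality constraints,
 * cf. (10): each h_i <= 0 is replaced by the equality
 * max(h_i, -mu_i/c_i) = 0 after eliminating the slack variables
 * (a sketch; variable names are illustrative). */
static void transform_inequalities(const double *h, const double *mu,
                                   const double *c, double *ht, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        double lo = -mu[i] / c[i];          /* activation threshold */
        ht[i] = (h[i] > lo) ? h[i] : lo;    /* component-wise max   */
    }
}
```

For an inactive constraint with zero multiplier, the transformed value is max(h, 0) = 0, so it contributes neither to the augmented cost nor to its gradient.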
Strong duality typically relies on convexity, which is difficult to investigate for general constrained nonlinear optimization problems.However, the augmented Lagrangian formulation is favorable in this regard, as the duality gap that may occur for unpenalized, nonconvex Lagrangian formulations can potentially be closed by the augmented Lagrangian formulation [53].
The motivation behind the algorithm presented in the following lines is to solve the dual problem (13) instead of the original one (1) by approaching the saddle point (14) from both sides. In essence, the max-min problem (13) is solved in an alternating manner by performing the inner minimization with a projected gradient method and the outer maximization via a steepest ascent approach. Note that the dynamics (13b) are captured inside the minimization problem instead of treating the dynamics as equality constraints of the form M ẋ − f(x, u, p, t) = 0 in the augmented Lagrangian [26]. This ensures the dynamical consistency of the computed trajectories in each iteration of the algorithm, which is important for an embedded, possibly suboptimal implementation.

Structure of the augmented Lagrangian algorithm
The basic iteration structure of the augmented Lagrangian algorithm is summarized in Algorithm 1 and is detailed in Sections 3.3 to 3.5. The initialization of the algorithm concerns the multipliers μ^1 and penalties c^1 as well as the definition of several tolerance values that are used for the convergence check (Section 3.4) and the update of the multipliers and penalties in (17) and (18), respectively.
In the current augmented Lagrangian iteration i, the inner minimization is carried out by solving the OCP (15) for the current set of multipliers μ^i and penalties c^i. Since the only remaining constraints within (15) are box constraints on the optimization variables (15c) and (15d), the problem can be efficiently solved by the projected gradient method described in Section 3.3. The solution of the minimization step consists of the control vector u^i and of the parameters p^i and free end time T^i, if these are specified in the problem at hand. The subsequent convergence check in Algorithm 1 rates the constraint violation as well as the convergence behavior of the previous minimization step and is detailed in Section 3.4.
If convergence is not reached yet, the multipliers and penalties are updated for the next iteration of the algorithm, as detailed in Section 3.5. Note that the penalty update in (18) relies on the last two iterates of the constraint functions (16). In the initial iteration i = 1 and if GRAMPC is used within an MPC setting, the constraint functions g^0, h^0, g_T^0, h_T^0 are warm-started with the corresponding last iterates of the previous MPC run. Otherwise, the penalty update is started in iteration i = 2.

Algorithm 1 Augmented Lagrangian algorithm (sketch)
- Initialize the multipliers μ^1 and penalties c^1; set the tolerances (ε_g, ε_h, ε_gT, ε_hT > 0)
- for i = 1, 2, ... do
- Solve the inner minimization problem (15) for (u^i, p^i, T^i)
- Check convergence (see Section 3.4)
- Update the multipliers according to (17) and the penalties according to (18) (see Section 3.5)
- Store the constraint functions (16)
- end

Note that if the end time T is treated as an optimization variable, as shown in Algorithm 1, the evaluation of (18) would formally require redefining the constraint functions g^{i−1}(t) and h^{i−1}(t), t ∈ [0, T^{i−1}], from the previous iteration on the new horizon length T^i by either shrinkage or extension. This redefinition is not explicitly stated in Algorithm 1, since the actual implementation of GRAMPC stores the trajectories in discretized form, which implies that only the discretized time vector must be recomputed once the end time T^i is updated.
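The alternating structure of Algorithm 1 can be illustrated on a scalar toy problem. The sketch below solves min ½x² s.t. x − 1 = 0, where the inner minimization is done in closed form instead of by a projected gradient method; this is an illustrative reduction of the scheme, not the GRAMPC algorithm itself:

```c
#include <assert.h>
#include <math.h>

/* Alternating augmented Lagrangian iteration on the toy problem
 *     min 0.5*x^2   s.t.   g(x) = x - 1 = 0.
 * The inner minimization of the augmented Lagrangian
 *     0.5*x^2 + mu*(x - 1) + 0.5*c*(x - 1)^2
 * is solved exactly; the outer loop is the steepest-ascent multiplier
 * update mu <- mu + c*g, in the spirit of (17). */
static double al_toy(double c, int imax)
{
    double mu = 0.0, x = 0.0;
    for (int i = 0; i < imax; ++i) {
        x = (c - mu) / (1.0 + c);   /* inner minimization (closed form) */
        mu += c * (x - 1.0);        /* outer multiplier update          */
    }
    return x;   /* approaches the constrained minimizer x* = 1 */
}
```

Each outer iteration contracts the multiplier error by the factor 1/(1 + c), so a moderate penalty already yields fast convergence without driving c toward infinity as in a pure penalty method.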

Gradient algorithm for inner minimization problem
The OCP (15) inside the augmented Lagrangian algorithm corresponds to the inner minimization problem of the max-min formulation (13) for the current iterates of the multipliers μ^i and penalties c^i. A projected gradient method is used to solve OCP (15) to a desired accuracy or for a fixed number of iterations.
The gradient algorithm relies on the solution of the first-order optimality conditions defined in terms of the Hamiltonian with the adjoint states λ ∈ R^Nx. In particular, the gradient algorithm iteratively solves the canonical equations, see e.g. [10], consisting of the original dynamics (20a) and the adjoint dynamics (20b) in forward and backward time, and computes a gradient update for the control in order to minimize the Hamiltonian in correspondence with Pontryagin's maximum principle [40,7], i.e. min_{u ∈ [u_min, u_max]} H(x, u, p, λ, t). If parameters p and/or the end time T are additional optimization variables, the corresponding gradients have to be computed as well. For the sake of completeness, Algorithm 2 lists the overall projected gradient algorithm for the full optimization case, i.e. for the optimization variables (u, p, T).
Algorithm 2 Projected gradient algorithm (inner minimization, sketch)
- Initialize (u^{i|1}, p^{i|1}, T^{i|1}) and compute the state trajectory by solving (26) for j = 0
- Set the step size adaptation factors γ_p > 0, γ_T > 0
- for j = 1, 2, ... do
- Compute the gradients (23)
- Compute the step size α^{i|j} by solving the line search problem (24)
- Update (u^{i|j+1}, p^{i|j+1}, T^{i|j+1}) according to (25)
- Evaluate the convergence criterion (27)
- end

The gradient algorithm is initialized with an initial control u^{i|1}(t), initial parameters p^{i|1}, and time length T^{i|1}. In case of MPC, these initial values are taken from the last sampling step using a warm start strategy with an optional time shift in order to compensate for the horizon shift by the sampling time Δt.
The algorithm starts in iteration j = 1 by computing the corresponding state trajectory x^{i|j}(t) as well as the adjoint state trajectory λ^{i|j}(t), the latter by integrating the adjoint dynamics (22) in reverse time. In the next step, the gradients (23) are computed and the step size α^{i|j} > 0 is determined from the line search problem (24). The projection functions ψ_u(u), ψ_p(p), and ψ_T(T) project the inputs, the parameters, and the end time onto the feasible sets (15c) and (15d). For instance, ψ_u clamps each control component to its admissible interval, i.e. [ψ_u(u)]_k = max{u_min,k, min{u_max,k, u_k}}. The next steps in Algorithm 2 are the updates (25) of the control u^{i|j+1}, parameters p^{i|j+1}, and end time T^{i|j+1} as well as the update of the state trajectory x^{i|j+1} in (26).
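A box projection of this kind amounts to a component-wise clamp; a minimal C sketch:

```c
#include <assert.h>
#include <stddef.h>

/* Component-wise projection onto box constraints, cf. psi_u for
 * (15c): clamp each control component to [umin_k, umax_k] (sketch;
 * psi_p and psi_T act analogously on p and T). */
static void project_box(double *u, const double *umin,
                        const double *umax, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        if (u[i] < umin[i])      u[i] = umin[i];
        else if (u[i] > umax[i]) u[i] = umax[i];
    }
}
```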
The convergence measure η^{i|j+1} in (27) rates the relative gradient changes of u, p, and T. If the gradient scheme converges in the sense of η^{i|j+1} ≤ ε_rel,c with threshold ε_rel,c > 0, or if the maximum number of iterations j_max is reached, the algorithm terminates and returns the last solution to Algorithm 1. Otherwise, j is incremented and the gradient iteration continues.
An important component of Algorithm 2 is the line search problem (24), which is performed in all search directions simultaneously.The scaling factors γ p and γ T can be used to scale the step sizes relative to each other, if they are not of the same order of magnitude or if the parameter or end time optimization is highly sensitive.GRAMPC implements two different line search strategies, an adaptive and an explicit one, in order to solve (24) in an accurate and robust manner without involving too much computational load.
The adaptive strategy evaluates the cost functional (24) for three different step sizes, i.e. (α_i, J̄_i), i = 1, 2, 3, with α_1 < α_2 < α_3, in order to compute a polynomial fitting function Φ(α) of the form (29), where the constants p_i are computed from the test points (α_i, J̄_i), i = 1, 2, 3. The approximate step size can then be derived analytically by solving (30). The interval [α_1, α_3] is adapted in the next gradient iteration if the step size α^j is close to the interval's borders, see [36] for more details. Depending on the OCP or MPC problem at hand, the adaptive line search method may not be suited for time-critical applications, since the approximation of the cost function (29) requires integrating the system dynamics (15b) and the cost integral (15a) three times. This computational load can be further reduced by an explicit line search strategy, originally discussed in [3] and adapted to the optimal control case in [35]. Motivated by the secant equation in quasi-Newton methods [3], this strategy minimizes the difference between two updates of the optimization variables (u, p, T) for the same step size and without considering the corresponding box constraints (15c) and (15d). The explicit method solves the problem (31), where Δ denotes the difference between the last and current iterate, e.g. Δu^{i|j} = u^{i|j} − u^{i|j−1}. The analytic solution is given by (32). Alternatively, the minimization can be carried out with the roles of the differences interchanged, similar to (31), leading to the corresponding solution (33). Both explicit formulas (32) and (33) are implemented in GRAMPC as an alternative to the adaptive line search strategy.
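The fitting step of the adaptive strategy can be sketched as follows: a quadratic interpolant through the three test points and its analytic minimizer, clipped to the search interval. This is an illustrative reconstruction of the fitting idea; GRAMPC's internal polynomial and adaptation logic may differ:

```c
#include <assert.h>
#include <math.h>

/* Fit a quadratic Phi(alpha) through (a[i], J[i]), i = 0,1,2, in
 * Newton form and return the root of Phi'(alpha), clipped to
 * [a[0], a[2]]; falls back to the better endpoint if the interpolant
 * has no interior minimum. */
static double fit_step_size(const double a[3], const double J[3])
{
    /* divided differences of the quadratic interpolant */
    double d1 = (J[1] - J[0]) / (a[1] - a[0]);
    double d2 = ((J[2] - J[1]) / (a[2] - a[1]) - d1) / (a[2] - a[0]);
    if (d2 <= 0.0)                       /* concave or linear fit */
        return (J[0] < J[2]) ? a[0] : a[2];
    double alpha = 0.5 * (a[0] + a[1]) - 0.5 * d1 / d2;  /* Phi'(alpha) = 0 */
    if (alpha < a[0]) alpha = a[0];
    if (alpha > a[2]) alpha = a[2];
    return alpha;
}
```

Only three cost evaluations are needed per gradient iteration, which is exactly the load the explicit strategy avoids.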

Convergence criterion
The gradient scheme in Algorithm 2 solves the inner minimization problem (15) of the augmented Lagrangian algorithm and returns the solution (u^i, p^i, T^i) as well as the maximum relative gradient η^i that is computed in (27) and used to check convergence inside the gradient algorithm. The outer augmented Lagrangian iteration in Algorithm 1 also uses this criterion along with the convergence check of the constraints.
The thresholds ε_g, ε_gT, ε_h, ε_hT are vector-valued in order to rate each constraint individually. If the maximum number of augmented Lagrangian iterations is reached, i.e. i = i_max, the algorithm terminates in order to ensure real-time feasibility. Otherwise, i is incremented and the next minimization of problem (15) is carried out.

Update of multipliers and penalties
The multiplier update (17) in Algorithm 1 is carried out via a steepest ascent approach, whereas the penalty update (18) uses an adaptation strategy that rates the progress of the last two iterates. In more detail, the multiplier update function for a single equality constraint g^i := g(x^i, u^i, p^i, t) is defined by (35). The steepest ascent direction is the residual of the constraint g^i with the penalty c_g^i serving as step size parameter. The additional damping factor 0 ≤ ρ ≤ 1 is introduced to increase the robustness of the multiplier update in the augmented Lagrangian scheme. In the GRAMPC implementation, the multipliers are additionally limited by an upper bound µ_max in order to avoid unlimited growth and numerical stability issues. The update (35) is skipped if the constraint is satisfied within the tolerance ε_g or if the gradient method is not sufficiently converged, which is checked by the maximum relative gradient η^i, cf. (27), and the update threshold ε_rel,u > 0. GRAMPC uses a larger value for ε_rel,u than the convergence threshold ε_rel,c of the gradient scheme in Algorithm 2. This accounts for the case that the gradient algorithm might not have converged to the desired tolerance before the maximum number of iterations j_max is reached, e.g. in real-time MPC applications, where only one or two gradient iterations might be applied. In this case, ε_rel,u > ε_rel,c ensures that the multiplier update is still performed, provided that at least "some" convergence was reached by the inner minimization.
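The update logic for a single equality constraint, including the skip conditions and the magnitude bound, can be sketched as follows (parameter names are illustrative):

```c
#include <assert.h>
#include <math.h>

/* Damped steepest-ascent multiplier update for one equality
 * constraint, in the spirit of (35): skipped if the constraint meets
 * its tolerance eps_g or the inner gradient method is insufficiently
 * converged (eta > eps_rel_u); the result is bounded by mu_max. */
static double update_multiplier(double mu, double g, double c, double rho,
                                double mu_max, double eps_g,
                                double eta, double eps_rel_u)
{
    if (fabs(g) <= eps_g || eta > eps_rel_u)
        return mu;                           /* skip the update       */
    double mu_new = mu + rho * c * g;        /* ascent along residual */
    if (mu_new >  mu_max) mu_new =  mu_max;  /* magnitude bound       */
    if (mu_new < -mu_max) mu_new = -mu_max;
    return mu_new;
}
```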
The penalty c_g^i corresponding to the equality constraint g^i is updated according to the heuristic update function (36), which is basically motivated by the LANCELOT package [12,49]. The penalty c_g^i is increased by the factor β_in ≥ 1 if the last gradient scheme converged, i.e. η^i ≤ ε_rel,u, but insufficient progress (rated by γ_in > 0) was made in the constraint violation compared to the previous iteration i − 1. The penalty is decreased by the factor β_de ≤ 1 if the constraint g^i is sufficiently satisfied within its tolerance with 0 < γ_de < 1. The setting β_de = β_in = 1 can be used to keep c_g^i constant. Similar to the multiplier update (35), GRAMPC restricts the penalty to upper and lower bounds c_min ≤ c_g^i ≤ c_max in order to avoid negligible values as well as unlimited growth of c_g^i. The lower penalty bound c_min is particularly relevant in case of MPC applications, where typically only a few iterations are performed in each MPC step. GRAMPC provides a support function that computes an estimate of c_min for the MPC problem at hand. The updates (35) and (36) define the vector functions ζ_g, ζ_gT and ξ_g, ξ_gT in (17a) and (18a) with N_g and N_gT components, corresponding to the number of equality and terminal equality constraints. Note that the multipliers for the equality and inequality constraints are time-dependent, i.e. µ_g(t) and µ_h(t), which implies that the functions ζ_g and ξ_g, resp. (35) and (36), are evaluated pointwise in time.
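The penalty adaptation can be sketched in the same spirit; the threshold semantics below are an illustrative reading of the LANCELOT-style heuristic, not the exact GRAMPC rule:

```c
#include <assert.h>
#include <math.h>

/* Penalty adaptation for one equality constraint in the spirit of
 * (36): increase c by beta_in if the converged inner solution made
 * insufficient progress on |g| relative to the previous iterate,
 * decrease by beta_de if the constraint is satisfied well within its
 * tolerance, and keep c in [c_min, c_max]. */
static double update_penalty(double c, double g, double g_prev,
                             double beta_in, double beta_de,
                             double gamma_in, double gamma_de,
                             double eps_g, double c_min, double c_max,
                             double eta, double eps_rel_u)
{
    if (eta <= eps_rel_u && fabs(g) > gamma_in * fabs(g_prev))
        c *= beta_in;                        /* insufficient progress */
    else if (fabs(g) <= gamma_de * eps_g)
        c *= beta_de;                        /* well satisfied        */
    if (c < c_min) c = c_min;                /* keep within bounds    */
    if (c > c_max) c = c_max;
    return c;
}
```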
The inequality-constrained case is handled in a similar spirit. For a single inequality constraint h̄^i := h̄(x^i, u^i, p^i, t, µ_h, c_h), cf. (10a), the multiplier and penalty updates are defined by (37) and (38) and constitute the vector functions ζ_h, ζ_hT and ξ_h, ξ_hT in (17b) and (18b) with N_h and N_hT components. The condition h̄^i < 0 in (37) ensures that the Lagrange multiplier µ_h^i is reduced for inactive constraints, which corresponds to either h^i < 0 or h^i < −µ_h/c_h in view of the transformation (10).

Structure and usage of GRAMPC
This section describes the framework of GRAMPC and illustrates its general usage. GRAMPC is designed to be portable and executable on different operating systems and hardware without the use of external libraries. The code is implemented in plain C with a user-friendly interface to C++, Matlab/Simulink, and dSPACE. The following lines give an overview of GRAMPC and demonstrate how to implement and solve a problem.

General structure
Figure 1 shows the general structure of GRAMPC and the steps that are necessary to compile an executable GRAMPC project. The first step in creating a new project is to define the problem using the provided C function templates, which are detailed in Section 4.2. The user has the possibility to set problem-specific parameters and algorithmic options concerning the numerical integration in the gradient algorithm, the line search strategy, and further preferences, see also Section 4.3.
A specific problem can be parameterized and numerically solved using C/C++, Matlab/Simulink, or dSPACE. A closer look at this functionality is given in Figure 2. The workspace of a GRAMPC project as well as algorithmic options and parameters are stored in the structure variable grampc. Several parameter settings are problem-specific and need to be provided, whereas other values are set to default values. A generic interface allows one to manipulate the grampc structure in order to set algorithmic options or parameters for the problem at hand. The functionalities of GRAMPC can be accessed from Matlab/Simulink by means of mex routines that are wrappers for the corresponding C functions. This allows one to run a GRAMPC project with different parameters and options without recompilation.

Fig. 1: General structure of GRAMPC (grey: C code, white: Matlab level).

Problem definition
The problem formulation in GRAMPC follows a generic structure. The essential steps of a problem definition are illustrated for the MPC example (39) with the quadratic cost (39a) in terms of Δx = x − x_des and Δu = u − u_des and corresponding weights. The dynamics (39b) are a simplified linear model of a single axis of a ball-on-plate system [51] that is also included in the testbench of GRAMPC (see Section 5).
The horizon length and the sampling time are set to T = 0.3 s and Δt = 10 ms, respectively. The problem formulation (39) is provided in the user template probfct.c. The following listing gives an impression of the function structure and the problem implementation for the ball-on-plate example (39). The naming of the functions follows the nomenclature of the general OCP formulation (1), except for the function ocp_dim, which defines the dimensions of the optimization problem. Problem-specific parameters can be used inside the single functions via the pointer userparam. For convenience, the desired setpoint (xdes, udes) to be stabilized by the MPC is provided to the cost functions separately and therefore does not need to be passed via userparam.
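To give an idea of the structure, the following self-contained sketch mirrors the role of the dimension and dynamics functions in probfct.c for a simplified single-axis model. The actual GRAMPC template uses its own types and function signatures, and both the double-integrator model and the gain b are illustrative assumptions, not the ball-on-plate model from [51]:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the problem definition in probfct.c (illustrative
 * names and signatures, not the exact GRAMPC template). The model is
 * a simplified single-axis double integrator x1' = x2, x2' = b*u
 * with a hypothetical gain b. */

/* problem dimensions, in the spirit of ocp_dim */
static void ocp_dim(size_t *nx, size_t *nu) { *nx = 2; *nu = 1; }

/* system function f(x, u) of the dynamics */
static void ffct(double *dx, const double *x, double u, double b)
{
    dx[0] = x[1];       /* position derivative = velocity */
    dx[1] = b * u;      /* velocity derivative = scaled input */
}
```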
In addition to the single OCP functions, GRAMPC requires the derivatives w.r.t. the state x, the control u, and, if applicable, w.r.t. the optimization parameters p and end time T, in order to evaluate the optimality conditions in Algorithm 2. Jacobians that occur in multiplied form, e.g. (∂f/∂x)^T λ in the adjoint dynamics (22), have to be provided in this representation. This helps to avoid unnecessary zero multiplications in case of sparse Jacobians. The following listing shows an excerpt of the corresponding gradient functions.
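For an illustrative double-integrator model f(x, u) = (x2, b·u), the multiplied Jacobian (∂f/∂x)^T λ reduces to a single assignment, which shows why providing the product directly avoids zero multiplications (names and model are hypothetical, not the GRAMPC template):

```c
#include <assert.h>

/* Multiplied Jacobian (df/dx)^T * lambda for the illustrative model
 * f(x, u) = (x2, b*u): df/dx = [0 1; 0 0], so the product has one
 * nonzero entry and no full Jacobian needs to be stored. */
static void dfdx_vec(double *out, const double *lambda)
{
    out[0] = 0.0;        /* f does not depend on x1          */
    out[1] = lambda[0];  /* df1/dx2 = 1 multiplies lambda_1  */
}
```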

Calling procedure and options
GRAMPC provides several key functions that are required for initializing and calling the MPC solver.As shown in Figure 2, there exist mex wrapper functions that ensure that the interface for interacting with GRAMPC is largely the same under C/C++ and Matlab.
The following listing gives an idea of how to initialize GRAMPC and how to run a simple MPC loop for the ball-on-plate example under Matlab. The listing also shows some of the algorithmic settings, e.g. the number of discretization points Nhor for the horizon [0, T], the maximum number of iterations (i_max, j_max) for Algorithms 1 and 2, or the choice of integration scheme for solving the canonical equations (22), (26). Currently implemented integration methods are (modified) Euler, Heun, and 4th-order Runge-Kutta as well as the solver RODAS [27], which implements a 4th-order Rosenbrock method for solving semi-implicit differential-algebraic equations with a possibly singular and sparse mass matrix M, cf. the problem definition in (1). The Euler and Heun methods use a fixed step size depending on the number of discretization points (Nhor), whereas RODAS and Runge-Kutta use adaptive step size control. The choice of integrator therefore has a significant impact on the computation time and allows one to trade off accuracy against computational efficiency. Further options not shown in the listing are, e.g., the settings (xScale, xOffset) and (uScale, uOffset) for scaling the state and input variables of the optimization problem.
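For illustration, the fixed-step Heun scheme mentioned above can be sketched in a few lines of Python; this is a generic textbook implementation, not the GRAMPC-internal one.

```python
import numpy as np

def heun(f, x0, tgrid):
    """Fixed-step Heun (explicit trapezoidal) integration over the grid
    tgrid, analogous to the fixed-step schemes selectable in GRAMPC."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for t0, t1 in zip(tgrid[:-1], tgrid[1:]):
        h = t1 - t0
        k1 = f(t0, x)                       # predictor slope
        k2 = f(t1, x + h * k1)              # corrector slope
        x = x + 0.5 * h * (k1 + k2)         # trapezoidal average
        traj.append(x.copy())
    return np.array(traj)

# Example: xdot = -x over a horizon of T = 0.3 s with Nhor = 20 points;
# the exact solution is exp(-t).
tgrid = np.linspace(0.0, 0.3, 20)
traj = heun(lambda t, x: -x, [1.0], tgrid)
assert abs(traj[-1, 0] - np.exp(-0.3)) < 1e-4
```

The second-order accuracy of Heun is usually a good compromise between the cheap but crude Euler step and the more expensive 4th-order Runge-Kutta evaluations.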
The initialization and calling procedure for GRAMPC is largely the same under C/C++ and Matlab. One exception is the handling of user parameters in userparam. Under C, userparam can be an arbitrary structure, whereas the Matlab interface restricts userparam to be of type array (of arbitrary length).
Moreover, the Matlab call of grampc_run_Cmex returns an updated copy of the grampc structure as output argument in order to comply with the Matlab policy to not manipulate input arguments.

Performance evaluation
The intention of this section is to evaluate the performance of GRAMPC under realistic conditions and for meaningful problem settings. To this end, an MPC testbench suite is defined to evaluate the computational performance in comparison to other state-of-the-art MPC solvers and to demonstrate the portability of GRAMPC to real-time and embedded hardware. The remainder of this section demonstrates the handling of typical problems from the field of MPC, moving horizon estimation, and optimal control.

General MPC evaluation
The MPC performance of GRAMPC is evaluated for a testbench that covers a wide range of meaningful MPC applications. For the sake of comparison, the two MPC toolboxes ACADO and VIATOC are included in the evaluation, although it is not the intention of this section to rigorously rate the performance against other solvers, as such a comparison is difficult to carry out objectively. The evaluation should rather give a general impression of the performance and properties of GRAMPC. In addition, implementation details are presented for running the MPC testbench examples with GRAMPC on dSPACE and ECU level.

Benchmarks
Table 1 gives an overview of the considered MPC benchmark problems in terms of the system dimension, the type of constraints (control/state/general nonlinear constraints), the dynamics (linear/nonlinear and explicit/semi-implicit), as well as the respective references. The MPC examples are evaluated with GRAMPC as well as with the ACADO Toolkit [30] and VIATOC [33].
The testbench includes three linear problems (mass-spring-damper, helicopter, ball-on-plate) and one semi-implicit reactor example, where the mass matrix M in the semi-implicit form (1b) is computed from the original partial differential equation (PDE) using finite elements, see also Section 5.2.3. The nonlinear chain problem is a scalable MPC benchmark with m masses. Three further MPC examples are defined with nonlinear constraints. The permanent magnet synchronous machine (PMSM) possesses spherical voltage and current constraints in dq-coordinates, whereas the crane example with three degrees of freedom (DOF) and the vehicle problem include a nonlinear constraint that simulates a geometric restriction to be bypassed (see also Section 5.2.1 for the crane example). Three of the problems (PMSM, 2D-crane, vehicle) are not evaluated with VIATOC, as nonlinear constraints cannot be handled by VIATOC at this stage.
For the GRAMPC implementation, most options are set to their default values. The only adapted parameters concern the horizon length T, the number of supporting points for the integration scheme and the integrator itself, as well as the number of augmented Lagrangian and gradient iterations, i_max and j_max, respectively. Default settings are used for the multiplier and penalty updates for the sake of consistency, see Algorithm 1 as well as Section 3.5. Note, however, that the performance and computation time of GRAMPC can be further optimized by tuning the parameters of the penalty update to a specific problem. All benchmark problems listed in Table 1 are available as example implementations in GRAMPC. ACADO and VIATOC are individually tuned for each MPC problem by varying the number of shooting intervals and iterations in order to either achieve minimum computation time (setting "speed") or optimal performance in terms of minimal cost at reasonable computational load (setting "optimal"). The quadratic programs arising in ACADO were solved with qpOASES [18].
The single MPC projects are integrated into a closed-loop simulation environment with a fourth-order Runge-Kutta integrator with adaptive step size to ensure an accurate system integration regardless of the integration schemes used internally by the MPC toolboxes. The evaluation was carried out on a Windows 10 machine with an Intel(R) Core(TM) i5-5300U CPU running at 2.3 GHz using the Microsoft Visual C++ 2013 Professional (C) compiler. Each simulation was run multiple times to obtain a reliable average computation time.

Evaluation results
Table 2 shows the evaluation results for the benchmark problems in terms of computation time and cost value integrated over the whole time interval of the simulation scenario. The cost values are normalized to the best one of each benchmark problem. The results for ACADO and VIATOC are listed for the settings "speed" and "optimal", as mentioned above. The depicted computation times are mean values, averaged over the complete time horizon of the simulation scenario. The best values for computation time and performance (minimal cost) for each benchmark problem are highlighted in bold font.
The linear MPC problems (mass-spring-damper, helicopter, ball-on-plate) with quadratic cost functions can be tackled well by VIATOC and ACADO due to their underlying linearization techniques. The PDE reactor problem contains stiff system dynamics in semi-implicit form. ACADO can handle such problems well using its efficient integration schemes, whereas VIATOC relies on fixed step size integrators and therefore requires a relatively large number of discretization points. While GRAMPC achieves the fastest computation time, the cost values of both ACADO settings as well as the VIATOC optimal setting are lower. A similar cost value, however, can be achieved with GRAMPC when deviating from the default parameters.
In case of the state-constrained 2D-crane problem, the overall cost is higher for ACADO than for GRAMPC. This appears to be due to the fact that the nonlinear constraint of a geometric object to be bypassed is active over almost the complete simulation time and ACADO does not reach the new setpoint in the given time. A closer look at this problem is taken in Section 5.2.1.
The CSTR reactor example possesses state and control variables of different orders of magnitude and therefore benefits from scaling. Since GRAMPC supports scaling natively, the computation time is faster than for VIATOC, where the scaling would have to be done manually. Due to the Hessian approximation used by ACADO, it is far less affected by the different scales of the states and controls. A large difference in the cost values occurs for the VTOL example (vertical take-off and landing aircraft). Due to the nonlinear dynamics and the corresponding coupling of the control variables, the gradient method underlying the minimization steps of GRAMPC appears to be more accurate and robust when starting in an equilibrium position than the iterative linearization steps undertaken by ACADO and VIATOC.
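The effect of such scaling can be pictured with the affine map behind options like (xScale, xOffset); the helper names and numbers below are purely illustrative assumptions.

```python
import numpy as np

def scale(x, x_scale, x_offset):
    """Map a physical state to the scaled optimization variable,
    mirroring the idea of the (xScale, xOffset) option."""
    return (x - x_offset) / x_scale

def unscale(xs, x_scale, x_offset):
    """Inverse map back to physical units."""
    return xs * x_scale + x_offset

# States of very different magnitude, e.g. a concentration and a temperature
x = np.array([1500.0, 380.0])
s, o = np.array([1000.0, 100.0]), np.array([1000.0, 300.0])
xs = scale(x, s, o)
assert np.allclose(xs, [0.5, 0.8])          # both states now of order one
assert np.allclose(unscale(xs, s, o), x)     # round trip is exact
```

Bringing all optimization variables to a comparable order of magnitude keeps a plain gradient method from being dominated by the largest-scaled component.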
The scaling behavior of the MPC schemes w.r.t. the problem dimension is investigated for the nonlinear chain in Table 2. Four different numbers of masses are considered, corresponding to 21-57 state variables and three controls. Although the algorithmic complexity of the augmented Lagrangian/gradient projection algorithm of GRAMPC grows linearly with the state dimension, this is not exactly the case for the nonlinear chain, as the stiffness of the dynamics increases for a larger number of masses, which leads to smaller step sizes of the adaptive Runge-Kutta integrator that was used in GRAMPC for this problem. ACADO shows a more significant increase in computation time for larger values of m, which was to be expected in view of the SQP algorithm underlying ACADO. The computation time for VIATOC is worse for this example, since only fixed step size integrators are available in the current release, which requires increasing the number of discretization points manually. Figure 3 shows a logarithmic plot of the computation time of all three MPC solvers over the number of masses of the nonlinear chain.
The computation times shown in Table 2 are average values and therefore give no direct insight into the real-time feasibility of the MPC solvers and the variation of the computational load over the single sampling steps. To this end, Figure 4 shows accumulation plots of the computation time per MPC step for three selected problems of the testbench. The computation times were evaluated over 30 successive runs to obtain reliable results. The plots show that the computation time of GRAMPC is almost constant over the MPC iterations, which is important for embedded control applications and for giving tight upper bounds on the computation time for real-time guarantees. ACADO and VIATOC show a larger variation of the computation time over the iterations, which is mainly due to the active set strategy that both solvers follow and the varying number of QP iterations in each real-time iteration of ACADO, cf. [30].

Fig. 3: Computation time for the nonlinear chain example (also see Table 2).
In conclusion, GRAMPC exhibits fast and real-time feasible computation times for all benchmark problems, in particular for nonlinear systems and in connection with (nonlinear) constraints. Furthermore, GRAMPC scales well with increasing system dimension.

Embedded realization
In addition to the general MPC evaluation, this section evaluates the computation time and memory requirements of GRAMPC for the benchmark problems on real-time and embedded hardware. GRAMPC was implemented on a dSPACE MicroLabBox (DS1202) with a 2 GHz Freescale QorIQ processor as well as on the microcontroller RH850/P1M from Renesas with a CPU frequency of 160 MHz, 2 MB program flash, and 128 kB RAM. This processor is typically used in electronic control units (ECU) in the automotive industry. The GRAMPC implementation on this microcontroller can therefore be seen as a prototypical ECU realization. As is commonly done in embedded systems, GRAMPC was implemented in single floating point precision on both systems to match the floating point units of the processors.
Table 3 lists the computation time and RAM memory footprint of GRAMPC on both hardware platforms for the testbench problems in Tables 1 and 2. The settings of GRAMPC are the same as in the previous section, except for the floating point precision. Due to the compilation size limit of the ECU compiler (< 10 kB), the nonlinear chain examples as well as the PDE reactor could not be compiled on the ECU.
The computation times on the dSPACE hardware are below the sampling time for all example problems. The same holds for the ECU implementation, except for the 2D-crane, the PMSM, and the VTOL example. However, as mentioned before, tuning the algorithm can further reduce the runtime, as most of the multiplier and penalty update parameters are kept at their default values. Note that there is no constant scaling factor between the computation times on dSPACE and ECU level, which is probably due to the different realization of the math functions by the respective floating point unit and compiler on the different hardware.
The required memory is below 9 kB for all examples except the nonlinear chain and the PDE reactor, which is less than 7 % of the available RAM on the considered ECU. Although the nonlinear chain and the PDE reactor could not be compiled on the ECU as mentioned above, the memory usage as well as the computation time increase only linearly with the size of the system (using the same GRAMPC settings). Overall, the computational speed and the small memory footprint demonstrate the applicability of GRAMPC for embedded systems.
Fig. 5: Schematic representation of the overhead crane (left) and simulated crane movement from the initial state to the desired setpoint (right).

Application examples
This section discusses some application examples, including a more detailed view of two selected problems from the testbench collection (a state-constrained and a semi-implicit problem), a shrinking horizon MPC application, an equality-constrained OCP, as well as a moving horizon estimation problem.

Nonlinear constrained model predictive control
The 2D-crane example in Tables 1 and 2 is a particularly challenging one, as it is subject to a nonlinear constraint that models the collision avoidance of an object or obstacle. A schematic representation of the overhead crane is given in Figure 5. The crane has three degrees of freedom and the nonlinear dynamics are given in [35], with the gravitational constant g. The system state x = [s_C, ṡ_C, s_R, ṡ_R, φ, φ̇]^T ∈ R^6 comprises the cart position s_C, the rope length s_R, the angular deflection φ, and the corresponding velocities. The two controls u ∈ R^2 are the cart acceleration u_1 and the rope acceleration u_2, respectively. The cost functional (2a) consists of the integral part, which penalizes the deviation from the desired setpoints x_des ∈ R^6 and u_des ∈ R^2. The weight matrices are set to Q = diag(1, 2, 2, 1, 1, 4) and R = diag(0.05, 0.05) (omitting units). The controls and the angular velocity are subject to the box constraints |u_i| ≤ 2 m s^-2, i ∈ {1, 2}, and |φ̇| ≤ 0.3 rad s^-1. In addition, the nonlinear inequality constraint (41) is imposed, which represents a geometric security constraint, e.g. for trucks over which the load has to be lifted, see Figure 6 (right). The prediction horizon and sampling time for the crane problem are set to T = 2 s and ∆t = 2 ms, respectively. The number of augmented Lagrangian steps and inner gradient iterations of GRAMPC are set to (i_max, j_max) = (1, 2). These settings correspond to the computational results in Tables 2 and 3.

Fig. 6: MPC trajectories for the 3DOF crane.
The right part of Figure 5 illustrates the movement of the overhead crane from the initial state x_0 = [−2 m, 0, 2 m, 0, 0, 0]^T to the desired setpoint x_des = [2 m, 0, 2 m, 0, 0, 0]^T. Figure 6 shows the corresponding trajectories of the states x(t) and controls u(t) as well as the nonlinear constraint (41) plotted as the time function h(x(t)). This transition problem is quite challenging, since the nonlinear constraint (41) is active for more than half of the simulation time. A slight initial violation of the constraint h(x) of approximately 1 mm can be seen, which should be negligible in practical applications. Nevertheless, the constraint can be satisfied to a higher accuracy by increasing the number of iterations (i_max, j_max), in particular of the augmented Lagrangian iterations.

MPC on shrinking horizon
"Classical" MPC with a constant horizon length typically acts as an asymptotic controller in the sense that a desired setpoint is only reached asymptotically. MPC on a shrinking horizon instead reduces the horizon time T in each sampling step in order to reach the desired setpoint in finite time. In particular, if the desired setpoint is incorporated into a terminal constraint and the prediction horizon T is optimized in each MPC step, then T will automatically be reduced over the runtime due to the principle of optimality.
The concept is illustrated for a double integrator with the state x = [x_1, x_2]^T and control u subject to the box constraint (42d). The weight r > 0 in the cost functional (42a) allows a trade-off between energy optimality and time optimality of the MPC. The desired setpoint x_des is added as terminal constraint (42e), i.e. g_T(x(T)) := x(T) − x_des = 0 in view of (1c), and the prediction horizon T is treated as an optimization variable in addition to the control u(τ), τ ∈ [0, T]. For the simulations, the weight in the cost functional is set to r = 0.01 and the initial value of the horizon length is chosen as T = 6. The iteration numbers for GRAMPC are set to (i_max, j_max) = (1, 2) in conjunction with a sampling time of ∆t = 0.001 in order to resemble a real-time implementation. Figure 7 shows the simulation results for the double integrator problem with the desired setpoint x_des = 0 and the initial state x(0) = [−1, −1]^T. The state trajectories reach the origin in finite time, corresponding to the anticipated behavior of the shrinking horizon MPC scheme. The lower right plot of Figure 7 shows the temporal evolution of the horizon length T over the runtime t. The initial end time of T = 6 is marked as a red circle. In the first MPC steps, the optimization quickly decreases the end time to approximately T = 3.5. In this short time interval, GRAMPC produces a suboptimal solution due to the strict limitation of the iterations (i_max, j_max). Afterwards, however, the prediction horizon declines linearly, which concurs with the principle of optimality and shows the optimality of the computed trajectories after the initialization phase. In the simulated example, this knowledge is incorporated in the MPC implementation by subtracting the sampling time ∆t from the last solution of T for warm-starting the next MPC step. The simulation is stopped as soon as the horizon length reaches its minimum value T_min = 0.1.
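The warm-starting rule for the horizon length described above can be sketched as follows (a minimal Python sketch with illustrative numbers):

```python
def shrink_horizon(T_opt, dt, T_min):
    """Warm start for shrinking-horizon MPC: the optimized end time of the
    previous step is reduced by the sampling time, but never below T_min."""
    return max(T_min, T_opt - dt)

# After the initialization phase, the horizon declines linearly with time:
# starting from T = 3.5 and shrinking by dt = 0.001 per MPC step.
T = 3.5
for _ in range(5):
    T = shrink_horizon(T, dt=0.001, T_min=0.1)
assert abs(T - 3.495) < 1e-12   # 5 steps remove exactly 5*dt
```

Clamping at T_min keeps the optimization problem well-posed once the setpoint has (almost) been reached.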

Semi-implicit problems
The system formulations that can be handled by GRAMPC include DAE systems with a singular mass matrix M as well as general semi-implicit systems. An application example is the discretization of spatially distributed systems by means of finite elements. This is illustrated for a quasi-linear diffusion-convection-reaction system, which is also implemented in the testbench (PDE reactor example). The thermal behavior of the reactor is described on the one-dimensional spatial domain z ∈ (0, 1) using the PDE formulation [59] with boundary and initial conditions for the temperature θ = θ(z, t). The process is controlled by the boundary control u = u(t). Diffusive and convective effects of the reactor are modeled by the nonlinear heat equation (43a) with the parameters q_1 = 2, q_2 = −0.05, and ν = 1. Reaction effects are included via the parameters r_0 = 1 and r_1 = 0.2. The Neumann boundary condition (43b), the Robin boundary condition (43c), and the initial condition complete the mathematical description of the system dynamics. Both the spatial and the time domain are normalized for the sake of simplicity.
A more detailed description of the system dynamics can be found in [59].
The PDE system (43) is approximated by an ODE system of the form (20a) by applying a finite element discretization [62], whereby the new state variables x ∈ R^{N_x} approximate the temperature θ on the discretized spatial domain with N_x grid points. The finite element discretization eventually leads to a finite-dimensional system dynamics of the form M ẋ = f(x, u) with the mass matrix M ∈ R^{N_x × N_x} and the nonlinear system function f(x, u). In particular, N_x = 11 grid points are chosen for the GRAMPC simulation of the reactor (43). The upper right plot of Figure 8 shows the sparsity structure of the mass matrix M ∈ R^{11×11}.
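How such a banded mass matrix arises can be sketched for linear finite elements on a uniform grid. This generic construction is an assumption for illustration only, not the paper's quasi-linear reactor discretization with its particular boundary conditions.

```python
import numpy as np

def fe_mass_matrix(n_nodes, length=1.0):
    """Consistent mass matrix for linear finite elements on a uniform 1D
    grid -- the kind of banded M that appears in M*xdot = f(x, u)."""
    h = length / (n_nodes - 1)
    M = np.zeros((n_nodes, n_nodes))
    # 2x2 element mass matrix of a linear element of length h
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    for e in range(n_nodes - 1):
        M[e:e + 2, e:e + 2] += Me   # assemble element contributions
    return M

M = fe_mass_matrix(11)  # N_x = 11 grid points as in the reactor example
assert M.shape == (11, 11)
assert np.isclose(M.sum(), 1.0)            # entries integrate the domain (0, 1)
assert np.count_nonzero(M) == 11 + 2 * 10  # tridiagonal sparsity pattern
```

The tridiagonal band is exactly the structure that a sparse solver such as RODAS can exploit instead of factorizing a dense 11 × 11 matrix.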
The control task for the reactor is the stabilization of a stationary profile x_des, which is accounted for in the quadratic MPC cost functional. The desired setpoint (x_des, u_des) as well as the initial values (x_0, u_0) are numerically determined from the stationary differential equation with the corresponding boundary conditions. The prediction horizon and sampling time of the MPC scheme are set to T = 0.2 and ∆t = 0.005. The number of iterations is limited by (i_max, j_max) = (1, 2). The box constraint for the control is chosen as |u(t)| ≤ 2. The numerical integration in GRAMPC is carried out using the solver RODAS [27]. The sparse numerics of RODAS allow one to cope with the banded structure of the matrices in a computationally efficient manner. Figure 8 shows the setpoint transition from the initial temperature profile x_0 to the desired temperature x_des = [2.00, 1.99, 1.97, 1.93, 1.88, 1.81, 1.73, 1.63, 1.51, 1.38, 1.23]^T and desired control u_des = −1.57.

OCP with equality constraints
An illustrative example of an optimal control problem with equality constraints is a dual-arm robot with closed kinematics, e.g. to handle or carry work pieces with both arms. For simplicity, a planar dual-arm robot with the joint angles (x_1, x_2, x_3) and (x_4, x_5, x_6) for the left and right arm is considered. The dynamics are given by simple integrators, and the closed kinematic chain is enforced by an equality constraint coupling the two arms. A point-to-point motion is considered as control task, which is formulated as the optimal control problem (49) with the fixed end time T = 10 s and the control constraints −u_min = u_max = [1, 1, 1, 1, 1, 1]^T s^-1, which limit the angular speeds of the robot arms. The cost functional minimizes the squared joint velocities with R = I_6.
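A closed-chain equality constraint of this kind can be sketched by requiring the two end effector positions to coincide. Link lengths, base positions, and the specific configuration below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def fk_planar(q, base_x, lengths):
    """End-effector position of a planar arm with relative joint angles q
    (link lengths and base offsets are illustrative assumptions)."""
    ang = np.cumsum(q)                       # absolute link angles
    x = base_x + np.sum(lengths * np.cos(ang))
    y = np.sum(lengths * np.sin(ang))
    return np.array([x, y])

def g_closed_chain(x):
    """Equality constraint g(x) = 0 closing the kinematic chain:
    both end effectors must meet at the common work piece."""
    q_left, q_right = x[:3], x[3:]
    L = np.ones(3)                           # unit link lengths (assumption)
    return fk_planar(q_left, -2.0, L) - fk_planar(q_right, 2.0, L)

# A symmetric configuration closes the chain: both tips meet at the origin.
q = np.array([np.pi / 3, -np.pi / 3, -np.pi / 3,
              2 * np.pi / 3, np.pi / 3, np.pi / 3])
assert np.allclose(g_closed_chain(q), 0.0, atol=1e-12)
```

In the augmented Lagrangian framework, g_closed_chain would simply be attached to the problem as a pointwise-in-time equality constraint g(x, u, p, t) = 0.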
Table 4 shows the computation results of GRAMPC for solving OCP (49) with increasingly restrictive values of the gradient tolerance ε_rel,c and constraint tolerance ε_g that are used for checking convergence of Algorithms 1 and 2. The required computation time t_CPU as well as the average number of (inner) gradient iterations and the number of (outer) augmented Lagrangian iterations are shown in Table 4. The successive reduction of the tolerances ε_rel,c and ε_g highlights that the augmented Lagrangian framework is able to quickly compute a solution of moderate accuracy. When the tolerances are tightened further, the computation time as well as the required iterations increase clearly. This is to be expected, as augmented Lagrangian and gradient algorithms are linearly convergent, as opposed to the superlinear or quadratic convergence of, e.g., SQP or interior point methods. However, the main application of GRAMPC is model predictive control, where a solution of moderate accuracy is typically sufficient.

Moving horizon estimation

The simulated scenario in Figure 10 consists of alternating setpoint changes between two stationary points. The two measured temperatures are subject to Gaussian measurement noise with a standard deviation of 4 °C. The initial guess for the state vector x of the MHE differs from the true initial values by 100 kmol m^-3 for both concentrations c_A and c_B and by 5 °C, respectively −7 °C, for the reactor and cooling temperature. This initial disturbance vanishes within a few iterations and the MHE tracks the ground truth (i.e. the simulated state values) closely, with an average error of δx = [7.12 kmol m^-3, 6.24 kmol m^-3, 0.10 °C, 0.09 °C]^T. This corresponds to a relative error of less than 0.1 % for each state variable when normalized to the respective maximum value. One iteration of the MHE requires a computation time of t_CPU = 11 µs to calculate a new estimate of the state vector x, which is about one third of the time necessary for one MPC iteration.

Conclusions
The augmented Lagrangian algorithm presented in this paper is implemented in the nonlinear model predictive control (MPC) toolbox GRAMPC and extends its original version, published in 2014, in several significant aspects. The system class that can be handled by GRAMPC comprises general nonlinear systems described by explicit or semi-implicit differential equations or differential-algebraic equations (DAE) of index 1. Besides input constraints, the algorithm accounts for nonlinear state- and/or input-dependent equality and inequality constraints as well as for unknown parameters and a possibly free end time as further optimization variables, which is relevant, for instance, for moving horizon estimation or MPC on a shrinking horizon. The computational efficiency of GRAMPC is illustrated for a testbench of representative MPC problems and in comparison with two state-of-the-art nonlinear MPC toolboxes. In particular, the augmented Lagrangian algorithm implemented in GRAMPC is tailored to embedded MPC with very low memory requirements. This is demonstrated in terms of runtime results on dSPACE and ECU hardware that is typically used in automotive applications. GRAMPC is available at http://sourceforge.net/projects/grampc and is licensed under the GNU Lesser General Public License (version 3), which is suitable for both academic and industrial/commercial purposes.

A Transformation of inequality to equality constraints
This appendix describes the transformation of the inequality constraints (1d) into the equality constraints (10) in more detail. A standard approach of augmented Lagrangian methods is to introduce slack variables v ≥ 0 and v_T ≥ 0 in order to append (1d) to the existing equality constraints (1c) according to

ĝ_T(x, p, T, v_T) = [g_T(x, p, T); h_T(x, p, T) + v_T] = 0,
ĝ(x, u, p, t, v) = [g(x, u, p, t); h(x, u, p, t) + v] = 0.

The minimization of the cost functional (55a) with respect to v and v_T can be carried out explicitly. In case of v_T, the minimization of (55a) reduces to a strictly convex, quadratic problem, for which the solution follows from the stationarity condition and a projection whenever the constraint v_T ≥ 0 is violated.
Since v only occurs in the integral and is not influenced by the dynamics (55b), the minimization of (55a) w.r.t. v = v(t) can be carried out pointwise in time and therefore reduces to a convex, quadratic problem similar to (56) with the pointwise solution v = max{0, −C_h^{-1} μ_h − h(x, u, p, t)}.
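The pointwise slack elimination can be evaluated elementwise, as the following Python sketch shows (c_h holds the diagonal of the penalty matrix C_h; the function name is illustrative):

```python
import numpy as np

def slack_update(h_val, mu_h, c_h):
    """Pointwise elimination of the inequality-constraint slacks:
    v = max(0, -C_h^{-1} mu_h - h), evaluated elementwise for a
    diagonal penalty matrix C_h with diagonal c_h."""
    return np.maximum(0.0, -mu_h / c_h - h_val)

# Strictly satisfied constraint (h < 0, small multiplier): positive slack.
assert slack_update(np.array([-1.0]), np.array([0.5]), np.array([1.0]))[0] == 0.5
# Violated constraint (h > 0): the projection clips the slack to zero.
assert slack_update(np.array([0.2]), np.array([0.5]), np.array([1.0]))[0] == 0.0
```

A zero slack means the transformed equality constraint h + v = 0 acts like the original active inequality, while a positive slack deactivates it.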

Fig. 2: Interfacing of GRAMPC to C (grey) and Matlab (white). Each GRAMPC function written in plain C has a corresponding Cmex function.
% user parameters
userparam = [100, 10, 1, 100, 10, -0.2, 0.01, -0.1, 0.1];
% initialization
grampc = grampc_init_Cmex(userparam);
% set parameters
grampc = grampc_setparam_Cmex(grampc, 'x0', [0.1; 0.01]);
grampc = grampc_setparam_Cmex(grampc, 'xdes', [-0.2; 0]);
grampc = grampc_setparam_Cmex(grampc, 'u0', 0);
grampc = grampc_setparam_Cmex(grampc, 'udes', 0);
grampc = grampc_setparam_Cmex(grampc, 'Thor', 0.3);
grampc = grampc_setparam_Cmex(grampc, 'dt', 0.01);
grampc = grampc_setparam_Cmex(grampc, 't0', 0);
grampc = grampc_setparam_Cmex(grampc, 'umin', -0.0524);
grampc = grampc_setparam_Cmex(grampc, 'umax', 0.0524);
% set options
grampc = grampc_setopt_Cmex(grampc, 'Nhor', 20);
grampc = grampc_setopt_Cmex(grampc, 'MaxGradIter', 2);
grampc = grampc_setopt_Cmex(grampc, 'MaxMultIter', 3);
grampc = grampc_setopt_Cmex(grampc, 'InequalityConstraints', 'on');
grampc = grampc_setopt_Cmex(grampc, 'Integrator', 'heun');
% MPC loop
for i = 1:iMPC
    % run GRAMPC
    grampc = grampc_run_Cmex(grampc);
    ...
    % set new initial state
    grampc = grampc_setparam_Cmex(grampc, 'x0', grampc.sol.xnext);
    ...
end
...

Fig. 4: Accumulation of the computation times for three different examples (averaged over 30 runs).

Table 1: Overview of MPC benchmark problems.

Table 2: Evaluation results for the benchmark problems in Table 1 with overall integrated cost J_int and computation time t_CPU in milliseconds.

Table 3: Computation time and memory usage for the embedded realization of GRAMPC on dSPACE hardware (DS1202) and ECU level (Renesas RH850/P1M) with single floating point precision.