Novel solution strategies for multiparametric nonlinear optimization problems with convex objective function and linear constraints

This paper expands the Multiparametric Quadratic Programming (mp-QP) framework presented in [10] to the more general Multiparametric Nonlinear Programming (mp-NLP) case. First, the vector of parameters in mp-NLP problems is recast so that a unique transformed parameter is implicitly assigned to each of the inequality constraints. Maps of critical regions in this transformed space of parameters feature a set of 1-dimensional parametric edges (two per inequality constraint), which greatly facilitate solution calculation. In the mp-NLP case, however, parametric edges define nonlinear semi-infinite lines; this requires an adaptation of the mp-QP algorithm (which deals with linear parametric edges only) to enable a suitable calculation path for the more general nonlinear case. Three routes are proposed for mp-NLPs: the first route delivers solutions in compact form (the same format as in mp-QP) using a single reference point per edge; the second route delivers explicit solutions using a hybrid approach for critical region construction, where all active sets not detected in the parameters space are excluded from the solution (equivalent to the first route concerning accuracy); the third route builds on the initial explicit solution and further partitions the parameters space until all solution fragments satisfy an error check. Five algorithms were coded for these routes and tested on a large range of mp-NLP problems.


Introduction
Multiparametric Programming deals with a special class of optimization problems, where a vector of parameters is defined independently of the optimization variables and bounded in a real space of parameters. These parameters are typically posed as a set of right-hand side dependencies in the definition of the inequality constraints, enabling the contraction and expansion of the feasible space. These problems are inherently infinite in size, and their solutions are commonly delivered as a set of partitions of the parameters space (the critical regions) and their optimizer functions [12].
In Multiparametric Linear and Quadratic Programming (mp-LP and mp-QP, respectively), optimizer functions are obtained exactly from the Karush-Kuhn-Tucker (KKT) conditions for all active sets [6], [5]. The matching critical regions are then obtained via a set of additional optimality and feasibility constraints imposed on the solution of the KKT conditions. This solution strategy is generally infeasible in Multiparametric Nonlinear Programming (mp-NLP): the KKT conditions in mp-NLP problems typically define a nonlinear system of equations, which cannot be solved explicitly for the independent vector of parameters. As a result, while in mp-LP and mp-QP the solution paradigm is based on the exact calculation of optimizer functions and critical regions per active set, in mp-NLP, approximation-based strategies rely on some pre-defined parameters space partitioning strategy.
In [4], a multiparametric outer approximation (mp-OA) algorithm is proposed, which relies on local linearizations of the objective function to deliver the solutions of mp-NLPs via mp-LP. In [8], a multiparametric quadratic approximation (mp-QA) algorithm is presented, iterating between the solutions of NLP and mp-QP problems. In [1], an approximate multiparametric (AM) algorithm is proposed, where the parameters space is divided sequentially via a set of convex hulls, and optimizer functions are obtained from interpolation until a pre-defined tolerance is satisfied. A geometric vertex search (GVS) algorithm is presented in [9], where critical regions are also delivered as convex hulls via a careful selection of vertices to improve solution accuracy and compactness. A review of the state-of-the-art mp-NLP algorithms is presented in [3], where an improvement to the mp-QA algorithm is also discussed.
The primary goal of mp-NLP algorithms is to deliver optimizer functions such that, for all combinations of parameters in the parameters space, the estimated optimizers match the exact optimizers as closely as possible (within a pre-defined tolerance). All algorithms above achieve this with distinct levels of success: this depends mainly on the strategy used to partition the parameters space, which may favor or hinder the calculation of solutions compactly and efficiently. It is noted in [9] that the critical regions obtained from mp-NLP algorithms frequently bear little or no resemblance to the actual optimal active sets maps. These maps are often very complex and feature critical regions where no noticeable patterns can be identified, which makes delivering the solutions of mp-NLP problems with the lowest possible complexity, while preserving the actual optimal active set maps, a challenging objective.
A new solution strategy for convex mp-QP was proposed in [10], which is particularly relevant to address the problem of how to partition the parameters space in mp-NLP problems: given an mp-QP problem including p inequality constraints, it is shown that the full map of critical regions may be described in a concise form via a set of 2p vectors of directions in ℝ^p, rather than up to a total of 2^p sets of parameters space bounds defining the critical regions for as many active sets. This is achieved after transforming the original vector of parameters, which renders highly structured optimal active sets maps in the new space of parameters. These principles are also applicable to mp-NLP, and in this work, we explore how to adapt them from the mp-QP to the mp-NLP case.
The rest of the paper is organized as follows: Section 2 motivates the need for more efficient mp-NLP algorithms and highlights the methodology used in this work. In Section 3, a formal framework and five new algorithms are proposed to solve mp-NLP problems. In Section 4, two mp-NLP problems are solved and a comparison with the state-of-the-art mp-NLP algorithms is presented. The proposed methodology and results are discussed in Section 5.

Motivating example
Consider the following mp-NLP problem (Equation 1), where x and θ are the vectors of optimization variables and parameters, respectively. For any x ∈ ℝ², the objective function's Hessian is a positive diagonal matrix, from where it follows that Equation 1 is a convex mp-NLP. The application of the AM algorithm to this problem is illustrated in Figure 1.
Two regions (CR_1 and CR_2) are created initially using the corners of Θ (Figure 1-A), and optimizer functions are obtained via interpolation of the corresponding optimizers and parameters. The accuracy of optimizer functions is tested, and critical regions are partitioned sequentially as many times as

Fig. 1 Critical regions map for motivating example via AM: (A) initial set of convex hulls; (B) final set of convex hulls. The exact optimal active set map is defined via the black dotted lines.
necessary until all optimizer functions for all partitions satisfy a pre-defined tolerance (Figure 1-B). Note that the final map includes 10 regions and bears little resemblance to the actual optimal active sets map. This partitioning scheme depends mainly on the corners of Θ and delivers a somewhat artificial map of critical regions. While the AM algorithm ensures optimizer functions for all critical regions are within a pre-defined tolerance, this strategy may require a number of partitions substantially larger than the total number of active sets: these partitions may span across several of the actual regions per optimal active set; in turn, optimizer sensitivities in these regions may be significantly different, which requires the creation of many partitions to achieve optimizer accuracy and leads unavoidably to algorithm and solution complexity.
It is reported in [3] that the improved mp-QA algorithm solves mp-NLPs more efficiently, as illustrated via the solution of a benchmark problem. Yet, the GVS algorithm is the only mp-NLP algorithm known to the authors focusing specifically on building a more natural set of critical regions: it aims to deliver accurate optimal active set maps via a computationally expensive partitioning scheme (applicable only to a subset of mp-NLP problems). Overall, and while the state-of-the-art mp-NLP algorithms do deliver accurate laws for optimizer calculation, we argue that the calculation of solutions minimizing algorithm complexity and more consistent with the exact optimal active set maps remains an open research subject.
Using the framework presented in [10], this new partitioning strategy may also be adapted to the mp-NLP case. A brief illustration is presented next; first, Equation 1 is recast using z = Fθ, with

F = [ −0.990   0.099
       0.990  −0.099
      −0.099  −0.990
       0.099   0.990 ]

where z is a vector of transformed parameters and Z the corresponding space of parameters. The exact map of optimal active sets in Z is depicted in Figure 2 (A). In mp-QP, after recasting z = Fθ, it suffices to calculate the objective function's global optimum (x*) and the (constant) sensitivities of all single-constraint active sets, from where the full solution is obtained via a set of trivial algebraic operations. The same principles apply to mp-NLP; the main difference resides in the fact that the sensitivities of single-constraint active sets are not constant, as highlighted in Figure 3 (A). To circumvent this, a single representative optimizer is used per single-constraint active set via the corresponding KKT conditions with z < z* (x^{*,1} and x^{*,2}), as depicted in Figure 3 (B). Constant estimates for all gradients (V_x) are obtained trivially from the reference optimizers.

Fig. 3 Optimizers for single-constraint active sets in motivating example: (A) exact calculation via the KKT conditions using multiple z < z*; (B) representative (constant) gradients using a single optimizer per active set (x^{*,1} and x^{*,2}) and the global optimum (x*).
The estimated 1-dimensional edges between all critical regions are then obtained from V_{z,a} = A V_x. Their calculation is depicted in Figure 2 (B). Note that the critical regions obtained this way match the optimal active set maps very closely, since this partitioning scheme explores the topology of critical regions in Z very efficiently. It also contributes to capturing optimizer sensitivities more efficiently, thus offering a promising path for mp-NLP. Section 3 presents a formal framework.
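As a concrete sketch of the two core operations above — recasting parameters via z = Fθ and estimating edge gradients from one reference optimizer per single-constraint active set — the numpy fragment below reproduces the calculation. The 4×2 shape of F is an assumption about how the matrix in the text is laid out (one row per inequality constraint, one column per original parameter), and `edge_gradients` is a hypothetical helper name:

```python
import numpy as np

# Transformation matrix from the motivating example (reconstructed from the
# text as a 4x2 matrix -- this shape is an assumption).
F = np.array([[-0.990,  0.099],
              [ 0.990, -0.099],
              [-0.099, -0.990],
              [ 0.099,  0.990]])

def transform(theta):
    """Recast the original parameters: z = F @ theta."""
    return F @ theta

def edge_gradients(x_star, x_refs, A):
    """Estimate optimizer-edge gradients V_x from one reference optimizer per
    single-constraint active set, then map them to parametric-edge gradients
    via V_{z,a} = A @ V_x, as in the text."""
    V_x = np.column_stack([x_ref - x_star for x_ref in x_refs])
    return V_x, A @ V_x
```

In this sketch the parametric-edge gradients follow immediately from the optimizer-edge gradients by a single matrix product, which is what makes the representative-point approach inexpensive.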

New solution strategies for convex mp-NLP
In this work, we address convex mp-NLP problems of the following class (Equation 3), where x and θ are the vectors of optimization variables and parameters, respectively, and f(x) is a convex nonlinear function. A ∈ ℝ^(p×n), b ∈ ℝ^(p×1), F ∈ ℝ^(p×m), P_A ∈ ℝ^(r×m) and P_b ∈ ℝ^(r×1) are constant real matrices and vectors. Constant integers m, n, p, and r denote the number of parameters, optimization variables, feasible space inequality constraints, and parameters space bounds, respectively. Active sets are referred to in this work via two alternative notations: (i) as a collection of abbreviated constraints of the form {c_j}, where j is any collection of constraint indexes between 1 and p, or (ii) as a binary vector y of size p, where y_j = 0 or 1 denotes that the j-th constraint is inactive or active, respectively. The empty active set y_empty = [0, 0, ..., 0] ({}) and the full active set y_full = [1, 1, ..., 1] ({c_1, c_2, ..., c_p}) are defined as particular cases of this notation.
Equation 3 is recast in two steps; first, a new vector of transformed parameters z = Fθ is defined (Equation 4). In those mp-NLP problems where F is a square invertible matrix, θ = F⁻¹z and an explicit definition of Z via a set of halfspaces is possible (excluding θ from Equation 4). We refer to Equation 4 as the transformed mp-NLP. In a second recasting step, Z is defined unbounded (Equation 5). We refer to Equation 5 as the expanded mp-NLP. It follows directly from the discussion in Sections 3.1.1 and 3.1.2 in [10] that the expanded mp-NLP and the recast mp-QP problems share the same topology properties. Namely, the solutions of both problems include: (i) a unique optimizer vertex, (ii) p optimizer edges, (iii) a unique parametric vertex, and (iv) 2p parametric edges. In mp-NLP, the optimizer vertex is the global optimum of f(x) and the optimizer edges include the full set of optimizers (x_opt) for all single-constraint active sets, defining 1-dimensional semi-infinite lines in X; the parametric vertex and the parametric edges are obtained from the corresponding optimizers using z = A x_opt − b, as illustrated in Figures 2 and 3.
In mp-NLP, optimizer edges generally define nonlinear semi-infinite lines. Optimizer sensitivities are not constant, and this is the most critical subject in adapting the newly developed mp-QP paradigm to the mp-NLP case. Building on the initial analysis proposed in Section 2, three new solution strategies are presented in Sections 3.1 – 3.3. These enable the calculation of the solutions for the expanded and transformed mp-NLPs with incremental accuracy, as schematically depicted in Figure 4. Five algorithms deliver this functionality. These were coded in Python [14], and supported on the numpy [7], SciPy [15] and pyomo [2] libraries. A high-level discussion is presented in this section, including the steps of all algorithms in the form of pseudo-codes; the Python codes for all algorithms are available in file algorithms mp-NLP (to include later in GitHub).

Compact solution
This concept was proposed in [10] for mp-QP. Taking the optimizer vertex (x*), the parametric vertex (z*), the gradients of optimizer edges (V_x), and the gradients of parametric edges (V_{z,i} for inactive constraints and V_{z,a} for active constraints), all critical regions (Equation 6) and optimizer functions (Equation 7) are expressed exactly in compact form, where l is a nonnegative vector of multipliers denoting the extents of feasible space expansion/contraction beyond z* via each of the p inequality constraints, and x_opt represents the vector of optimizers. The compact solution (CS) format is also directly applicable to the expanded and transformed mp-NLP problems given their shared topology properties. The calculation of all vertices and gradients is adapted from [10] to the mp-NLP case as summarized in Algorithm 1, where obtaining estimates for V_x in Step 3 requires the most significant update.
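A minimal sketch of how the CS is evaluated once Algorithm 1 has delivered its vertices and gradients, assuming the affine form x_opt = x* + V_x l for the optimizer functions of Equation 7 (the exact sign conventions are those of [10] and are not restated here; `cs_optimizer` is a hypothetical helper):

```python
import numpy as np

def cs_optimizer(x_star, V_x, l):
    """Evaluate the compact-solution optimizer functions, assuming the affine
    form x_opt = x* + V_x @ l of Equation 7. l holds the nonnegative
    per-constraint expansion/contraction multipliers."""
    l = np.asarray(l, dtype=float)
    if np.any(l < 0):
        raise ValueError("multipliers l must be nonnegative")
    return np.asarray(x_star, dtype=float) + V_x @ l
```

The point of the CS is that this single affine map, together with the three gradient matrices, replaces an explicit list of critical regions and per-region optimizer functions.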

Algorithm 1
Step 1: Calculate the global optimizer of f(x), x*, via the KKT conditions (unconstrained problem).
Step 3A.3: Estimate the gradients of the corresponding optimizer edge as v_{x,j} = x^{*,j} − x*.
Step 3B:
Step 4: Define V_{z,i} as the identity matrix of size p.
Step 5: Calculate V_{z,a} = A V_x.
Step 6: Using x*, z* and matrices V_x, V_{z,i}, V_{z,a}, write the CS via Equations 6 and 7.
(a) For all j = 1, ..., p, if z_j^min ≥ z_j^*, there is no θ ∈ Θ where inequality constraint c_j excludes x* from the feasible space. In this case, set z_j^min = z_j^* − Δz, where Δz > 0 is a user-defined parameter.
(b) δ is a user-defined vector of slacks that broadens the scope of the basic and refined explicit solutions (additional details in Section 3.2.5).
(c) This design favors the selection of representative points for all edges in the context of Θ, thus promoting solution accuracy specifically within its bounds.
The Linear Programming (LP) problem below delivers z_j^min = (e^(j))^T Fθ in Step 3A.1, where e^(j) is a unit vector such that e_k^(j) = 0 for all k = 1, ..., p, except k = j, where e_j^(j) = 1. Formally, the CS covers all z ∈ ℝ^p; however, since the exact solutions of mp-NLPs are generally nonlinear, and critical regions and optimizer functions delivered via the CS are affine functions of l, a deviation between them is unavoidable. These deviations are typically more pronounced for z ≪ z*, limiting the applicability of the CS in the context of the expanded mp-NLP problem. Solution accuracy is covered in greater detail in Section 2 of Appendix 1, where it is shown that the CS delivers feasible optimizers for all z ∈ ℝ^p.
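Assuming the LP takes the form "minimize (e^(j))^T Fθ subject to P_A θ ≤ P_b" (a reconstruction from the surrounding text, since the LP statement itself is not reproduced above), Step 3A.1 can be sketched with `scipy.optimize.linprog`; `z_min_j` is a hypothetical helper:

```python
import numpy as np
from scipy.optimize import linprog

def z_min_j(j, F, P_A, P_b):
    """Sketch of the LP of Step 3A.1: minimize z_j = (e^(j))^T F @ theta over
    the parameters space P_A @ theta <= P_b, returning z_j^min. The objective
    vector is simply the j-th row of F."""
    c = F[j, :]
    res = linprog(c, A_ub=P_A, b_ub=P_b,
                  bounds=[(None, None)] * F.shape[1])  # theta is free
    if not res.success:
        raise RuntimeError("LP infeasible or unbounded")
    return res.fun
```

Note that `bounds` must be passed explicitly, since `linprog` defaults to nonnegative variables while θ is unrestricted here.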

Basic explicit solution
Explicit solutions may be obtained for any active set by setting y directly in Equations 6 and 7 and listing their solution fragments. An improved methodology for the calculation of the basic explicit solution (BES) is proposed in this section, delivering critical regions in the context of the transformed mp-NLP problem only. Sections 3.2.1 – 3.2.5 discuss the supporting concepts of Algorithm 2, which is presented in Section 3.2.6.

Constraints status assessment
Without prior assessment, all p inequality constraints of the transformed mp-NLP may be inactive or active in distinct regions of Θ, from where the theoretical maximum number of active sets comprising its solution amounts to 2^p. To reduce problem complexity, it is desirable to conclude if any, and which, constraints are always inactive or always active, thus potentially reducing significantly the total number of active sets and critical regions to assess during the calculation of the BES. This can be achieved by adapting Equation 6 as shown in Equation 9, where j is any integer between 1 and p, thus enforcing the corresponding parametric edge (inactive constraint) on the definition of the permissible space; z = Fθ and the bounds of Θ are also enforced on this problem statement. If a solution exists for Equation 9, there is at least one θ ∈ Θ where the j-th constraint is inactive. Conversely, if no solution exists, there is no θ ∈ Θ where the j-th constraint is inactive at the optimal solutions; in other words, this constraint is always active in Θ.
An equivalent statement is defined to test parametric edges associated with active constraints; it suffices to update y_j = 1 in Equation 9, and in this case, if no solution exists for this problem, the j-th constraint is always inactive in Θ. The total numbers of always active/always inactive constraints are denoted as 0 ≤ N_a ≤ p and 0 ≤ N_i ≤ p, respectively. Equation 9 may be recast equivalently as a Mixed Integer Linear Programming (MILP) problem (checking feasibility to reach one of the conclusions above). Details are discussed in Section 3 of Appendix 1.
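Since Equation 9 is not reproduced above, the sketch below only illustrates the generic feasibility probe underlying these status tests: assemble the relevant constraints (parametric edge, z = Fθ, bounds of Θ) into inequality/equality form and solve an LP with a zero objective, so that LP feasibility answers the status question. `has_feasible_point` is a hypothetical helper:

```python
import numpy as np
from scipy.optimize import linprog

def has_feasible_point(A_ub, b_ub, A_eq=None, b_eq=None):
    """Feasibility probe for the constraint-status tests of Section 3.2.1:
    solve an LP with a zero objective and report whether any point satisfies
    the supplied constraints. The caller assembles the problem-specific
    constraints of Equation 9."""
    n = A_ub.shape[1]
    res = linprog(np.zeros(n), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * n)
    return bool(res.success)
```

Running one such probe per parametric edge (2p probes in total) yields the counts N_a and N_i used later to prune the active set enumeration.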

Critical region dimensionality
All critical regions are implicitly defined in Equation 6 via a unique combination of p column vectors of V_{z,i} and V_{z,a}, spanning them in Z. Their dimensions in this space depend exclusively on the linear independence between their p vectors of directions: given an active set y and the corresponding matrix of column vectors from V_{z,i} and V_{z,a} (denoted as V_{z,y}), the dimension of its critical region in Z is obtained trivially from rank(V_{z,y}).
Since V_{z,i} is the identity matrix of size p (all vectors are orthogonal to each other), critical region dimension may alternatively be calculated from the subset of active constraints for a given active set y. In mp-NLP, optimizer edges define nonlinear semi-infinite lines in X, and thus the estimated matrices V_x and V_{z,a} via Algorithm 1 depend on the defined parameters space. A more robust way of checking dimensionality consists of checking linear independence between the relevant rows of A. This is achieved as follows: (i) calculate rank(A_{z,y}), where A_{z,y} denotes the set of rows of A matching the active constraints of y; (ii) if rank(A_{z,y}) = Σ_{j=1}^p y_j, the corresponding critical region is full-dimensional in Z.
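The rank test of item (ii) is a one-liner with numpy; `is_full_dimensional` below is a hypothetical helper wrapping it:

```python
import numpy as np

def is_full_dimensional(A, y):
    """Dimensionality test of Section 3.2.2: take the rows of A matching the
    active constraints of y and compare their rank to the number of active
    constraints. Full rank means the critical region is full-dimensional in Z."""
    y = np.asarray(y, dtype=bool)
    if not y.any():
        return True  # empty active set: no row condition to check
    A_y = A[y, :]
    return bool(np.linalg.matrix_rank(A_y) == int(y.sum()))
```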

A hybrid partitioning strategy
A new strategy for critical region construction is presented, making use on the one hand of the nonnegative combinations of the gradients of parametric edges as presented in Equation 6, and on the other hand incorporating, selectively, convex hull constraints in their definition. Given an active set y such that constraints c_{J1}, ..., c_{Jj} are active and constraints c_{K1}, ..., c_{Kk} are inactive, the corresponding critical region is now expressed via Equation 10, where s ∈ ℝ^{j+1}_{≥0} and t ∈ ℝ^k_{≥0} span the parametric edges for the active and inactive constraints of y, respectively, in place of l. The set of points z^{*,J1}, ..., z^{*,Jj} is obtained from z = Ax − b, using the set of optimizers from the corresponding optimizer edges (Step 3A.2 in Algorithm 1). This partitioning strategy is entirely equivalent to the CS, except that an additional constraint (Σ_{i=1}^{j+1} s_i = 1) is now enforced in Equation 10, which limits the extents of critical regions via the set of parametric edges associated with active constraints to promote solution accuracy. An equivalent definition of Equation 10 using matrix notation and the matching optimizer functions are presented in Equations 11 and 12, respectively.
where matrices in Equations 11 and 12 are denoted as M_CR and M_OF, respectively. A graphical illustration of the partitioning strategy via Equation 11 and a discussion on the theoretical consistency between Equations 11 and 12 are presented in Section 4 of Appendix 1.
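Membership of a point z in a hybrid critical region (Equation 10) can be posed as an LP feasibility problem in the multipliers [s | t] ≥ 0 with Σ s = 1. The sketch below assumes the reference points and inactive-edge gradients are stored column-wise, which is an assumption about the layout of Equation 11; `in_hybrid_region` is a hypothetical helper:

```python
import numpy as np
from scipy.optimize import linprog

def in_hybrid_region(z, Z_act, V_inact):
    """Membership test for a hybrid critical region (Equation 10): z must be a
    convex combination of the reference points in the columns of Z_act
    (z*, z^{*,J_1}, ..., z^{*,J_j}) plus a nonnegative combination of the
    inactive-edge gradient columns of V_inact."""
    M = np.hstack([Z_act, V_inact])              # unknowns: [s | t] >= 0
    n_s, n_t = Z_act.shape[1], V_inact.shape[1]
    convexity_row = np.concatenate([np.ones(n_s), np.zeros(n_t)])
    A_eq = np.vstack([M, convexity_row])         # z-match rows + sum(s) = 1
    b_eq = np.concatenate([np.asarray(z, dtype=float), [1.0]])
    res = linprog(np.zeros(n_s + n_t), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n_s + n_t))
    return bool(res.success)
```

The convexity row Σ s_i = 1 is exactly the extra constraint that distinguishes the hybrid regions from the CS cones.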

Optimal active set detection
The following LP (Equation 13) enables one to conclude whether a given active set y defines a critical region in Θ, where the equality constraints are set from the corresponding critical region of y via Equation 11. If Equation 13 returns a feasible solution, there is at least one θ ∈ Θ where active set y is detected as the optimal solution.

Parametric edges scope
The selection of the reference optimizers and parameters in Step 3A.2 of Algorithm 1 (x^{*,j} and z^{*,j}, with j = 1, ..., p) becomes more important in the context of Equation 11. For instance, in Figure 2 (B), the convex combinations enabled via Equation 11 for all active sets cover the full extent of the space of parameters (depicted in Figure 5), thus making the selected reference points an adequate choice. In some mp-NLP problems, this may not be the case; this is illustrated with a second example in Figure 7 (B), where a small triangle of the transformed space around [0, 0] is not covered via any of the permitted convex combinations for active set {c_1, c_2}, as expected. To accommodate this scenario, Step 3A.2 in Algorithm 1 includes a user-defined vector of slacks δ, which for all j = 1, ..., p negatively increments z_j^min, thus ensuring that the reference points calculated in this step cover the full space of parameters via their critical regions (Figure 7-A).
Finding δ such that it covers the parameters space as tightly as possible presents several challenges. We argue that a simpler approach towards setting δ is a better compromise between solution accuracy and algorithm complexity: for all j = 1, ..., p, δ_j = α(z_j^min − z_j^*), where α denotes a relative increment parameter with respect to the initial estimation of z_j^min (α ≈ 0–0.1).

Critical region mapping
The calculation of explicit solutions in Multiparametric Programming is generally based on the exploration of Θ, where a set of critical regions is calculated in sequence until the so-called rest space is empty [5], [11]. A new paradigm for critical region mapping is presented here, based exclusively on Equation 6. A brute force approach would consist of enumerating all 2^p active sets and delivering the matching critical regions via Equation 11, which would generally be a very expensive option.
Equation 6 enables a more efficient mapping, as hinted in Sections 3.2.1 – 3.2.5. First, by defining the 2p MILPs for the 2p parametric edges via Equation 9, the number of candidate active sets reduces to 2^(p−N_i−N_a): all candidate active sets are such that the N_i and N_a fixed coordinates of y are set to constant values equal to 0 and 1, respectively. When all 2p MILPs are infeasible, Θ defines an infeasible region, and in this case the BES includes 0 active sets.
The number of candidates may be further decreased by excluding all active sets necessarily defining low-dimensional critical regions. In line with the discussion in Section 3.2.2, this occurs for any given active set where the number of active constraints exceeds the number of optimization variables. An active set enumeration strategy is proposed to exclude them, where the p − N_i − N_a free coordinates of y are selectively set to 0 or 1, and where up to n − N_a of these coordinates are equal to 1. The enumeration begins with i = 0 free coordinates equal to 1; i is then incremented sequentially to deliver the full set of candidate active sets, which amounts to a total of Σ_{i=0}^{n−N_a} C_i^{p−N_i−N_a}. These active sets are then tested such that only those defining full-dimensional critical regions detected in Θ are listed in the BES. All steps are summarized in Algorithm 2.
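The enumeration scheme can be sketched with `itertools.combinations`; `candidate_active_sets` is a hypothetical helper, and the candidate count matches the sum of binomial coefficients above:

```python
from itertools import combinations

def candidate_active_sets(p, free_idx, always_active, n_active_max):
    """Enumerate candidate active sets as in Section 3.2.6: always-active
    coordinates of y are fixed to 1, always-inactive ones stay 0, and the
    free coordinates are set with i = 0, 1, ..., n_active_max of them active."""
    candidates = []
    for i in range(n_active_max + 1):
        for combo in combinations(free_idx, i):
            y = [0] * p
            for k in always_active:
                y[k] = 1
            for k in combo:
                y[k] = 1
            candidates.append(tuple(y))
    return candidates
```

For example, with p = 4, one always-active constraint, and at most two free coordinates active, the enumeration yields C(3,0) + C(3,1) + C(3,2) = 7 candidates instead of 2^4 = 16.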

Algorithm 2
Step 1: Initialize vectors u i and u a with all coordinates equal to 0, and vector y ref with all coordinates equal to -1 (a) (all vectors of size p).
Step 6: Set y to the j th active set in the list of candidates.
Step 7: Calculate rank(A_y); if rank(A_y) = Σ_{j=1}^p y_j, calculate the corresponding critical region via Equation 11 (c). Else, go to Step 9.
Step 8: Solve the LP in Equation 13; if a feasible solution is found, add the j th active set and its critical region to the BES.Else, go to Step 9.
Step 9: If j < Σ_{i=0}^{n−N_a} C_i^{p−N_i−N_a}, set j = j + 1 and go to Step 6.
Step 10: Else, define the general optimizer functions via Equation 15 (details in Section 3.2.7).
(c) Active sets defining critical regions of dimension p − 1 are also saved. All candidate active sets listed in Step 6 are first compared against these low-dimensional active sets; any candidate active set sharing the same active constraints of one of these low-dimensional active sets is also associated with a low-dimensional critical region, and excluded from the BES (skip Steps 7 and 8).

Offline & online solutions
Instead of recording critical regions in accordance with the notation in Equation 11, storing their inverse matrices in line with Equation 14 is presented as an alternative.From an offline perspective, the first option is preferable since it dismisses the calculation of an inverse matrix per detected active set.In an online context, the second option enables a faster route for critical region identification, since this becomes a simpler function evaluation problem.
The notation used for matrices M_CR (Equation 11) and M_OF (Equation 12) is such that all active/inactive constraints are captured in the central/right blocks of these matrices, respectively (sensitive/insensitive transformed parameters). In practice, all columns of M_CR and M_OF are sorted and saved in the original order; vector [s|t]^T is sorted accordingly and denoted as γ. This enables a general and computationally inexpensive representation of all optimizer functions for all active sets using Equation 15.
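The offline/online trade-off can be sketched as follows, assuming (as one reading of Equation 14) that region identification reduces to recovering the multipliers γ from a square matrix and checking their nonnegativity; the exact augmentation of z and the acceptance test depend on Equation 11's layout, so all three helpers are hypothetical:

```python
import numpy as np

def offline_store(M_cr):
    """Offline option: keep M_CR itself; each query solves a linear system."""
    return {"M": M_cr}

def online_store(M_cr):
    """Online option: precompute the inverse once (assuming M_CR is square
    and invertible), so each query is one matrix-vector product."""
    return {"M_inv": np.linalg.inv(M_cr)}

def identify(store, z_vec, tol=1e-9):
    """Point location sketch: recover the multipliers gamma and accept the
    region if they are all nonnegative."""
    if "M_inv" in store:
        gamma = store["M_inv"] @ z_vec
    else:
        gamma = np.linalg.solve(store["M"], z_vec)
    return bool(np.all(gamma >= -tol)), gamma
```

Both stores give identical answers; the online variant simply moves the factorization cost from query time to construction time, which is the trade-off described above.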

Refined explicit solution
The refined explicit solution (RES) builds on the BES to improve solution accuracy. The proposed strategy comprises three stages: first, a set of additional points per edge is calculated (Algorithm 3); then, a set of more precise partitions is obtained from these points (Algorithm 4); at last, these partitions are broken down into smaller partitions/critical regions, if necessary, until their optimizer functions deliver optimizers within a pre-defined error tolerance (Algorithm 5).

Edges refinement
All edges are implicitly defined in the CS and the BES as single line segments. These are now halved and tested sequentially, using as many segments as necessary via a set of additional points to improve their accuracy. Their calculation is summarized in Algorithm 3.

Algorithm 3
Step 1: Define the list of p single-constraint active sets. Set j = 1 (active set: {c_1}).
Step 2: If u_{i,j} = 1, go to Step 7 (a).
Step 3: Initialize the list of intervals to process with: z_j^min − δ_j ≤ z_j ≤ z_j^*.
Step 4: While there is at least one interval to process, take the first interval in this list (denoted as z_j^lb ≤ z_j ≤ z_j^ub) and:
Step 4.1: Calculate z_j^center = (z_j^lb + z_j^ub)/2, and solve the KKT conditions for active set {c_j} at z_j = z_j^center to obtain optimizer x^{*,exact}.
Step 4.3: Calculate the approximation error ϵ.
Step 4.4: Remove the first interval from the list of intervals.
Step 4.5: If ϵ > ζ_edges, where ζ_edges is a pre-defined error tolerance: (i) add x^{*,exact} to the j-th optimizer edge, (ii) break the current interval into two intervals, z_j^lb ≤ z_j ≤ z_j^center and z_j^center ≤ z_j ≤ z_j^ub, and (iii) add the two intervals to the list of intervals. Go to Step 4 (while loop).
Step 5: For all optimizers added to the j-th optimizer edge in Step 4.5, calculate the matching reference points z = Ax^{*,exact} − b, and add them to the j-th parametric edge (active constraint).
Step 6: If j < p, set j = j + 1 (active set: {c j }) and go to Step 2.
Step 7: For all j = 1, ..., p, order the full set of representative points for the j-th optimizer and parametric edges from the highest to the lowest z_j.
(a) From Section 3.2.1, if this condition holds, the current edge is not used in the calculation of the BES, and its refinement is unnecessary in this case.
(b) Formally, the optimizer function would be calculated first via interpolation of x^lb (at z_j^lb) and x^ub (at z_j^ub), and then x^{*,estim} obtained at z_j^center. The interpolation step is dismissed and the optimizer is obtained directly via x^lb and x^ub.
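Algorithm 3's interval-halving loop can be sketched generically, with the problem-specific KKT solve of Step 4.1 supplied as a callable; `refine_edge` is a hypothetical helper covering Steps 3–4.5 for one edge, using the average-of-endpoints shortcut of note (b) and assuming the error measure is the Euclidean norm of the deviation:

```python
import numpy as np

def refine_edge(z_lo, z_hi, exact_opt, tol, max_depth=12):
    """Interval-halving refinement of one optimizer edge (Algorithm 3).
    exact_opt(z_j) stands in for the KKT solve for active set {c_j} at z_j.
    A segment is split whenever the midpoint optimizer deviates from the
    average of its endpoint optimizers by more than tol."""
    points = {z_lo: exact_opt(z_lo), z_hi: exact_opt(z_hi)}
    stack = [(z_lo, z_hi, 0)]
    while stack:
        lo, hi, depth = stack.pop()
        mid = 0.5 * (lo + hi)
        x_exact = exact_opt(mid)
        x_estim = 0.5 * (points[lo] + points[hi])  # note (b) shortcut
        if np.linalg.norm(x_exact - x_estim) > tol and depth < max_depth:
            points[mid] = x_exact
            stack.append((lo, mid, depth + 1))
            stack.append((mid, hi, depth + 1))
    # Step 7: order representative points from highest to lowest z_j
    return sorted(points.items(), key=lambda kv: -kv[0])
```

On an edge where the optimizer varies linearly with z_j no points are added, while a curved edge is subdivided until the linear-segment error falls below the tolerance.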

Initial set of critical regions
Making use of the points calculated in Section 3.3.1, a new set of critical regions matching the actual optimal active set map more closely is obtained, where each active set now includes one or more critical regions. All steps are summarized in Algorithm 4.

Algorithm 4
Step 1: If no new points are added in Algorithm 3, terminate: the initial set of critical regions matches the BES.
Step 2: Define the list of all N_as active sets identified in the BES. Set j = 1 (first active set).
Step 3: Define the list of all reference points for the j-th active set from the points in the associated parametric edges, excluding those points at their last positions (a); list them as "unprocessed" (b).
Step 4: Define the first critical region via Equation 11, using z* and the vectors of representative points per each of the relevant parametric edges at their second positions (c). Update z* to the "processed" status. Set the current positions of all parametric edges to their second positions.
Step 5: While there are any "unprocessed" reference points:
Step 5.1: Create a new region using a reference point per edge at their current positions, and add a point from one of the associated parametric edges at the next available position (d).
Step 5.2: At the same parametric edge used in Step 5.1, update the status of the point at the current position to "processed", and increment this position by 1. Go to Step 5 (while loop).
Step 6: If j < N_as, set j = j + 1 (next active set in the list) and go to Step 3.
(a) All points in edges are ordered via Step 7 of Algorithm 3: for all j = 1, ..., p, z* is at the first position, and z_j^min − δ_j at the last position.
(b) The "processed" and "unprocessed" statuses reflect if points at edges are available or not, respectively, for the purposes of creating critical regions.
(c) All gradients for all associated inactive constraints are included accordingly in Equation 11.
(d) To promote a balanced construction of partitions, the parametric edge including the highest number of "unprocessed" points is selected in this step.

Critical regions check and refinement
Using the full set of initial critical regions delivered in Algorithm 4, two checks are made: (i) detecting which are included in the parameters space, and (ii) testing the accuracy of their optimizer functions. Accordingly, critical regions are excluded or broken down into smaller partitions until optimizers are estimated within a pre-defined tolerance. Algorithm 5 summarizes all calculations.

Algorithm 5
Step 1: Define the list of all N cr initial critical regions identified in Algorithm 4. Set j = 1 (first critical region).
Step 2: Initialize the list of partitions to check in the current iteration (j) using the j th critical region in the list of critical regions.
Step 3: While there is at least one partition to check, take the first partition in this list and: Step 3.1: Check if the partition is included in the parameters space via Equation 13. If not, remove the partition from the list of partitions to check and go to Step 3 (while loop).
Step 3.2: Calculate z center as the average of all reference points used in the partition's definition, and solve the KKT conditions for its active set at z = z center to obtain the exact optimizer x*,exact.
Step 3.3: Define its optimizer functions using the matching optimizers via Equation 12.
Step 3.4: Calculate the estimated optimizer x*,estim from the optimizer functions at z center , as the average of the optimizers used in their definition (a).
Step 3.5: Calculate the approximation error ϵ between x*,exact and x*,estim.
Step 3.6: Remove the first partition from the list of partitions.
Step 3.7: If ϵ > ζ partitions , where ζ partitions is a pre-defined error tolerance: (i) break the current partition into β + 1 partitions (b) via all the combinations of β of its reference points with z center , and (ii) add these partitions to the list of partitions to check. Go to Step 3 (while loop).
Step 3.8: Else, if ϵ ≤ ζ partitions , add the partition/critical region and its optimizer functions to the RES. Go to Step 3 (while loop).
Step 4: If j < N cr , set j = j + 1 (next initial critical region in the list) and go to Step 2.
(a) See note (b) of Algorithm 3 for details.
(b) β denotes the number of active constraints in the current active set/partition.
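A minimal Python sketch of the check-and-refine loop follows. The inclusion test (Equation 13), the KKT solve and the optimizer-function estimate are passed in as placeholder callables (hypothetical names), and the error is taken here as the largest coordinate-wise deviation between the exact and estimated optimizers:

```python
from itertools import combinations

def refine(partition, in_theta, kkt_solve, estimate, tol):
    """Check and refine one initial critical region (sketch of Algorithm 5).

    `partition` is a list of reference points; `in_theta`, `kkt_solve` and
    `estimate` stand in for Equation 13, the KKT solve at z_center and the
    optimizer-function prediction, respectively.
    """
    res, todo = [], [partition]
    while todo:                                   # Step 3
        part = todo.pop(0)                        # take (and remove, Step 3.6)
        if not in_theta(part):                    # Step 3.1: outside Theta
            continue
        n = len(part)
        z_center = tuple(sum(c) / n for c in zip(*part))         # Step 3.2
        x_exact = kkt_solve(z_center)
        x_estim = estimate(part, z_center)                       # Steps 3.3-3.4
        eps = max(abs(a - b) for a, b in zip(x_exact, x_estim))  # Step 3.5
        if eps > tol:
            # Step 3.7: split into beta + 1 sub-partitions, each combining
            # all-but-one reference point with z_center, and re-check them.
            todo += [list(c) + [z_center] for c in combinations(part, n - 1)]
        else:
            res.append((part, x_estim))           # Step 3.8: accept into RES
    return res
```

When the optimizer functions are exact (the estimate matches the KKT solve), no splitting occurs and the initial partition is accepted unchanged, which is precisely what happens in the motivating example below.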

Computational complexity
The computational complexity of Algorithms 1-5 is summarized in Table 1. It is expressed via the number of auxiliary optimization problems required per algorithm, which contribute most significantly to their execution time.
Algorithm 1 is likely the lightest mp-NLP algorithm developed to date: complexity scales linearly with the number of constraints instead of the more common exponential dependency reported in the field [12]. Its solution includes only 3 matrices (via Equations 6 and 7) in place of a set of individual critical regions and their optimizer functions.
In Algorithms 2-5, it is not possible to quantify exactly the number of auxiliary optimization problems, since these algorithms depend on the definition of the parameters space and on problem nonlinearity; lower bounds are listed where applicable. Algorithm 2 requires the validation of all critical regions, which amounts to roughly the number of optimal active sets detected in Θ (N as ). In Algorithm 3, all edges are refined, which includes at least the assessment of a middle point per edge. Algorithm 5 checks and, where applicable, breaks all partitions, which includes at least the set of initial critical regions (N cr ) derived in Algorithm 4.
We remark that the NLPs listed in Table 1 originate from the KKT conditions; these define systems of nonlinear equations rather than formal optimization problems (subject to a set of inequality constraints). Their solutions are cheaper to obtain than solving an equivalent NLP (setting θ in Equation 3, finding the optimal active set and solving for x).
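As an illustration of why this is cheaper, consider a toy equality-constrained problem (hypothetical, not from the paper): minimize (x1 − 1)^2 + (x2 − 2)^2 subject to x1 + x2 = z. Its KKT conditions form a square system of equations that can be solved directly, with no inequality handling or active-set search:

```python
def kkt_solve(z):
    """Solve the KKT system of: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 = z.

    Stationarity: 2*(x1 - 1) + lam = 0 and 2*(x2 - 2) + lam = 0;
    feasibility:  x1 + x2 = z. For this toy problem the system is linear and
    admits a closed form; in the general mp-NLP case it is a nonlinear system
    handed to a root finder rather than to a full NLP solver.
    """
    x1 = (z - 1.0) / 2.0
    x2 = (z + 1.0) / 2.0
    lam = 2.0 - 2.0 * x1  # from the first stationarity equation
    return x1, x2, lam
```

The same structure holds for each active set of an mp-NLP: once the active set is fixed, the KKT conditions are a square system in (x, λ), which is the source of the NLP counts in Table 1.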

Motivating example
The motivating example (Equation 1) is now solved in accordance with the steps highlighted in Figure 4 to deliver the CS, BES and RES. The transformed and expanded mp-NLPs are defined in Equations 2 and 16, respectively. The following settings of the user-defined parameters are employed for algorithm execution: ∆z = 0.05, δ = [0, 0], ζ edges = ζ partitions = 10^−2. Detailed, step-by-step calculations of all solutions, including graphical insights, are presented in Section 5 of Appendix 1. The check in Equation 13 confirms that all critical regions are included in Θ and are added to the BES (Step 8). The general optimizer functions are obtained in Step 10 (Equation 19).
All critical regions in the BES are listed in Table 2 and depicted in Figure 5; note that regions only extend to infinity via the set of parametric edges associated with inactive constraints. From Algorithm 3: the list of all single-constraint active sets includes {c 1 } and {c 2 } (Step 1). u i = [0, 0], and no edge is dismissed from the remaining refinement steps (Step 2). The two edges are then broken down sequentially in Steps 3 and 4: two additional optimizers are added to the first optimizer edge and none is required for the second edge. In Step 5, the corresponding points for the first parametric edge are also added to improve its accuracy. All points are sorted in Step 7. The complete set of points per edge is summarized in Table 3.

Initial set of critical regions: From Algorithm 4: the termination condition in Step 1 is not satisfied; the list of all active sets validated in the BES includes {}, {c 1 }, {c 2 } and {c 1 , c 2 } (Step 2). All initial partitions are created in Steps 3-6. For active sets {} and {c 2 }, since no points were added to the second edge, their partitions match those of the BES. For active sets {c 1 } and {c 1 , c 2 }, the new points listed in Table 3 are also used to calculate the initial critical regions. These are illustrated in Figure 6, where a sequential increment of the position of the first edge delivers their critical regions.
Fig. 6 Map of initial critical regions for motivating example (RES).

Critical regions check and refinement:
From Algorithm 5: eight initial critical regions are listed for checking and refinement, as depicted in Figure 6 (Step 1). All critical regions are checked in Step 3.1 and confirmed to be included in Θ; x*,exact, the optimizer functions, x*,estim and ϵ are calculated for all critical regions in Steps 3.2-3.5, with ϵ < ζ partitions = 10^−2. The condition in Step 3.7 is not triggered for any region; all critical regions and their optimizer functions are added to the RES in Step 3.8. No additional critical region partitioning is required: the final set of critical regions matches the initial regions delivered in Algorithm 4. The RES is presented in Table 4.

Benchmark mp-NLP
The following benchmark mp-NLP problem is presented in [13], [9] and [3]. The transformed and expanded mp-NLP problems are defined via the trivial recasting steps highlighted earlier (omitted for brevity). Parameters ∆z = 0.05, δ = [0.05, 0.05, 0.05, 0.05], ζ edges = 10^−5 and ζ partitions = 10^−6 are set for algorithm execution. The objective function in Equation 20 is not convex: the Hessian matrix is positive definite only when x 1 > −2/3. All solutions are subject to this requirement, and their validity is discussed in sequence.
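The convexity check reduces to testing positive definiteness of the Hessian over the region of interest. A sketch, assuming an objective whose only non-quadratic term is cubic in x 1, so that the Hessian is diagonal with entries 2 + 3x 1 and 2 (a hypothetical form, chosen only to reproduce the stated x 1 > −2/3 threshold; Equation 20 itself is not reproduced here):

```python
def hessian_is_pd(x1):
    """Positive definiteness of the assumed Hessian H = [[2 + 3*x1, 0], [0, 2]].

    A diagonal matrix is positive definite iff every diagonal entry is
    positive, so the check collapses to 2 + 3*x1 > 0, i.e. x1 > -2/3.
    """
    return 2.0 + 3.0 * x1 > 0.0
```

In general, the same test is performed on the eigenvalues of the (non-diagonal) Hessian evaluated at each candidate optimizer.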

Compact solution
The CS is delivered via Algorithm 1 and presented in Equations 21 and 22. When z 1 ≪ 0.573 or z 2 ≪ 0.393, x 1 < −2/3: the convexity requirement does not hold in this case, and the CS is not valid in the context of the expanded mp-NLP (Z defined unbounded). On the other hand, all optimizers for all z ∈ Z in the transformed mp-NLP satisfy x 1 > −2/3, thus preserving the convexity requirement and ensuring the validity of the CS in this context.

Basic explicit solution
Algorithm 2 delivers the BES. In this case, the MILPs for the 3rd and 4th active parametric edges return infeasible: constraints c 3 and c 4 are always inactive in Θ, so only active sets {}, {c 1 }, {c 2 } and {c 1 , c 2 } are relevant. The list of critical regions is presented in Table 5; the general optimizer function applicable to all regions is presented in Equation 23. For all z ∈ Z, feasible solutions are obtained with negligible optimality loss, and the critical regions' map in Figure 7 is very similar to those reported for mp-QA and GVS in [3]. The additional refinements in the BES and RES deliver explicit solutions with marginal accuracy gains with respect to the CS, with computational complexity similar to mp-QA, though not requiring the solution of any auxiliary mp-QP problem.

Concluding remarks
A new solution strategy for mp-NLP problems is proposed in this work. The critical step enabling these developments is the parameter transformation z = F θ: in the transformed parameters space, all critical regions share a unique vertex and are bounded by a set of 2p 1-dimensional edges, which define the backbone of critical region maps. This enables an enhanced partitioning strategy where critical region maps are obtained more consistently with the actual optimal active sets detected in the parameters space, optimizer sensitivities are captured more accurately, and algorithm and solution complexity remain low. While the CS enables the calculation of optimizers for all z ∈ IR p , this solution is of limited applicability in the context of the expanded mp-NLP; since optimizer edges are nonlinear, significant deviations from the actual optimizers are expected when z ≪ z*. The expanded mp-NLP is presented here mainly as a theoretical extension of the mp-QP case in [10]. In practical terms, this work focuses mainly on obtaining the solution of the transformed mp-NLP, which is a more realistic objective for this class of problems.
Three routes are presented to deliver the solution of transformed mp-NLP problems, built incrementally for improved accuracy. The CS is the first of these, where optimizer sensitivities are estimated from a set of representative points within the parameters space to promote accuracy. This is generally a good first estimate, provided that optimizer edges are only mildly nonlinear. The BES enables a fully declarative presentation of all solution fragments, more consistent with standard practice in the field. The solution includes all full-dimensional critical regions detected in the parameters space, now subject to convex-hull restrictions (Equation 10) to prevent z ≪ z*. A single optimizer function delivers the optimizers for all critical regions. We remark that the BES offers important insights from the analysis of its maps of critical regions, as illustrated in Figure 5: in {}, no constraint is active, and thus the matching optimizer function is insensitive to z 1 and z 2 ; its critical region extends to infinity since all its points denote an extension of the feasible space beyond z* with no impact on the associated optimizers. An equivalent analysis may be undertaken for all active sets in mp-NLP problems of any size.
The key drawback of the CS and BES resides in the fact that no steps are enforced to check and improve solution accuracy. The RES deals with this limitation by checking all solution fragments and partitioning them further until the error tolerance is satisfied.

Fig. 2
Critical regions for motivating example: (A) exact optimal active set map; (B) 1-dimensional edges between critical regions estimated from optimizer sensitivities.
(a) Notation for y ref : −1: inactive or active; 0: always inactive; 1: always active constraints in Θ. (b) All fixed coordinates of all candidate active sets are set equal to the matching coordinates of y ref . All candidate active sets are enumerated sequentially based on the number of active constraints: first, the unique active set where i = 0 free components of y are set to 1; then, the p − N i − N a active sets where i = 1 free components of y are set to 1, and so on.

Fig. 5
Map of critical regions for motivating example (BES). The transformed parameters space is bounded by the black dotted lines.

Table 2
Basic explicit solution for motivating example.

Table 3
Reference points included in the optimizer and parametric edges of motivating example.

Table 4
Refined explicit solution for motivating example.

Table 5
Basic explicit solution for benchmark problem.