1 Introduction

We propose a new decomposition method to solve multistage non-convex mixed-integer (stochastic) nonlinear programming problems (MINLPs), i.e., optimization problems modeling a sequential decision making process. Continuous and integer decision variables and possibly non-convex objective functions and constraints are allowed for any of the T stages.

If multistage (stochastic) problems are too large to be solved by off-the-shelf solvers, tailored solution techniques are required. One example is decomposition algorithms, which exploit the specific sequential and block-diagonal structure of the constraints. The problems are decomposed into a large number of smaller but coupled subproblems which are solved iteratively. One of the most common decomposition methods is Benders decomposition, introduced by Benders [6] for linear programs. Since then, it has been extended to several more general cases, such as convex problems (generalized Benders decomposition (GBD) [19]), two-stage stochastic linear problems (L-shaped method [49]) and multistage (stochastic) linear problems (nested Benders decomposition (NBD) [8]). To mitigate the curse of dimensionality related to NBD in the stochastic case, Pereira and Pinto introduced its sampling-based variant, stochastic dual dynamic programming (SDDP) [35], which was followed by various extensions [23, 37].

The basic principle of NBD is to use the dynamic programming formulation of a given multistage problem. For each stage \(t \in \left\{ 1,\ldots , T \right\} \), a parametric subproblem is considered. This subproblem contains only those constraints, variables and parts of the objective function related to this specific stage, plus a value function determining the optimal value of all following stages for a given stage t solution. Since the value functions are not known in advance, they are iteratively approximated with linear cutting-planes. However, this approach requires the value functions to be convex. Therefore, most decomposition methods for multistage problems cover linear programs, as their value functions are guaranteed to be piecewise linear and convex.

However, in many applications, integer variables or nonlinearities also occur naturally. In such cases, the value functions are no longer convex and may even be discontinuous. Therefore, the classical Benders approach fails, as it is impossible to construct a tight convex polyhedral approximation [47].

Thus, more sophisticated approaches have been developed to apply Benders-type decomposition methods to non-convex MINLPs, mostly in the two-stage case. Li et al. propose an extension of GBD to the non-convex case for two-stage stochastic MINLPs with functions separable in integer and continuous variables [29, 30]. In [28], a branch-and-cut framework is presented, where in each node Lagrangian and generalized Benders cuts are constructed. Related methods are proposed in [26, 33]. None of these methods has been generalized to the multistage case yet.

To handle non-convexities in multistage problems, a common idea is to use convex relaxations of the value function, e.g., by relaxing the integrality constraints for MILPs or by convexifying nonlinear terms in a static manner. Dynamically convexifying the non-convex value functions using Lagrangian relaxation techniques allows for a polyhedral approximation by Lagrangian cuts [10, 45, 46]. None of these discussed approaches can guarantee to compute an optimal solution for non-convex multistage problems, though.

Only recently, some substantial progress has been made in generalizing the Benders decomposition idea to multistage problems with non-convex value functions directly. In [36], step functions are used, instead of cutting-planes, to approximate the value functions, presuming their monotonicity.

For the mixed-integer linear case, the stochastic dual dynamic integer programming (SDDiP) approach is proposed [55]. SDDiP is an enhancement of NBD and SDDP which allows the solution of multistage (stochastic) MILPs in the case of binary state variables. The method is based on generating special Lagrangian cuts, which reproduce the lower convex envelope of the value function. As the latter is piecewise linear and exact at binary state variables, strong duality is ensured and the problem is solved to global optimality in a finite number of iterations. SDDiP is applied to multistage unit commitment in [54]. It is also applied to a problem containing non-convex functions in the context of hydropower scheduling by using a static binary expansion of the state variables and a Big-M reformulation [22].

As long as the value functions are assured to be Lipschitz continuous and some recourse property is satisfied, the requirement of binary state variables can be dropped, as is shown by the Stochastic Lipschitz Dynamic Programming (SLDP) method in [1]. Here, two types of non-convex Lipschitz continuous cuts are introduced: reverse-norm cuts and augmented Lagrangian cuts.

In [52], Zhang and Sun present a new framework to solve multistage non-convex stochastic MINLPs, generalizing both SDDiP and SLDP. Similarly to [1], nonlinear generalized conjugacy cuts are constructed by solving augmented dual problems. Moreover, as Lipschitz continuity is not assured for the value functions, a Lipschitz continuous regularized value function is considered within the decomposition method.

In this article, we propose a new method to solve multistage non-convex MINLPs to proven global optimality, which we refer to as non-convex nested Benders decomposition (NC-NBD). The method combines piecewise linear relaxations, regularization, binary approximation and the SDDiP Lagrangian cuts in a unique and dynamic fashion. Its basic idea is to solve a MINLP by iteratively improved MILP outer approximations, which in turn are solved using a NBD-based decomposition scheme similar to that in [52]. The binary and piecewise linear approximations are dynamically refined.

In particular, the original MINLP is outer approximated by MILPs, which are iteratively improved in an outer loop. Those MILPs are obtained by piecewise linear approximations of all occurring nonlinear functions, which is an established method in global optimization [50]. More generally, using MILP relaxations is a common approach in global optimization solvers [27, 32, 53].

In an inner loop, the multistage MILPs are solved to approximate optimality in finitely many steps. This is achieved using a NBD-based decomposition method. In a forward pass through the stages, trial solutions for the dynamic programming equations are determined. As Lipschitz continuity of the value functions is not guaranteed, this is done by solving a regularized forward pass problem, as proposed in [52]. For a sufficiently large, but finite, parameter, the regularization is exact [14, 52], so that still the desired MILP is solved.

In a backward pass through the stages, nonlinear non-convex cuts are constructed to approximate the non-convex value functions of the MILP. To this end, we make use of a binary approximation of the state variables in the subproblems. As proven in [55], for MILPs with binary state variables we obtain (sufficiently) tight cuts by solving Lagrangian dual problems. The constructed linear cuts are then projected back to the original state space, yielding a nonlinear, non-convex, but Lipschitz continuous approximation of the value functions. The binary approximation is refined dynamically within the inner loop if required. By careful construction, all existing cuts remain valid even with such refinements.

Once the MILP approximation is solved to approximate optimality, the cut approximation of the value functions is used in the outer loop to determine bounds for the optimal value of the original MINLP. If the bounds are sufficiently close, the algorithm terminates with an \(\varepsilon \)-optimal solution. Otherwise, the piecewise linear approximations are refined, and thus the approximating MILP is tightened. Again, by careful construction it is ensured that all previously generated cuts remain valid.

To the best of our knowledge, the above concepts have not been combined in this dynamic way to solve multistage non-convex MINLPs yet. In that regard, our work also differs significantly from the aforementioned solution techniques.

Our proposed decomposition scheme uses the same regularization technique and similar convergence ideas as in [52]. However, a fundamental difference is that we only apply this technique to solve MILP outer approximations of the original MINLP. This has the advantage that in our framework MINLPs have to be solved only occasionally. In contrast, in [52], MINLPs are assumed to be solved by some oracle in each iteration and cuts are generated directly for the MINLP, which is computationally challenging. Moreover, contrary to our approach, the method in [52] does not require recourse assumptions, but in return it only allows for state variables in the objective function.

In contrast to SDDiP [55] and SLDP [1], we solve MINLPs, and thus consider a larger solution framework with an inner and an outer loop. However, even the inner loop, in which MILPs are solved, differs from both approaches.

To solve MILPs with non-binary state variables using SDDiP, it is proposed to apply a static binary approximation [22, 55]. This way, the original MILP is replaced by an approximating problem with only binary state variables. It can be shown that for a sufficiently small approximation precision, i.e., a sufficiently large number of binary variables, an \(\varepsilon \)-optimal solution of an MILP can be determined with this approach under some recourse assumption [55]. However, for a given problem at hand, it is not necessarily clear in advance how this precision has to be chosen, as knowledge of a problem-specific Lipschitz constant is required. This becomes even more challenging in our framework, where an MINLP is iteratively approximated by MILPs, for which the required precision may change. In contrast, within NC-NBD the binary approximation is refined dynamically if required.

More crucially, in NC-NBD the binary approximation is applied temporarily only to derive cuts in the backward pass. These cuts are then projected back to the original state space. This construction has a few key advantages: Firstly, it is ensured that cuts remain valid even if the binary precision is refined later on. Secondly, the original state variables remain continuous and are not limited to values which can be exactly represented by the binary approximation. This, in turn, ensures that the true MILPs are solved in the inner loop. Consequently, the generated cuts are valid for the value functions of these MILPs and, due to their relaxation property, also the original MINLP. Analogously, the obtained lower bounds are valid for the corresponding optimal values. Importantly, this is not true for SDDiP with static binary approximation, where the state space is permanently modified and only approximations of the true MILPs are solved in the inner loop. In our approach to solve MINLPs, it is crucial to determine guaranteed valid cuts for the value functions in both loops. Therefore, SDDiP cannot be used effectively in this setting.

Our cut generation approach also differs from that in SLDP [1] (and also [52]), where augmented Lagrangian problems are solved to determine nonlinear cuts. While our method comes at the cost of introducing additional (binary) variables and constraints compared to those approaches, e.g., for the cut projection, we avoid solving dual problems containing nonlinear penalization in the objective. Such penalization may be disadvantageous as it prevents decomposition of the primal problems which are solved in the solution process of the dual problem. Additionally, in contrast to SLDP [1], we do not assume continuously complete recourse, but only the weaker complete recourse, as we circumvent the requirement of Lipschitz continuity of the true value functions by regularization.

The main contributions of this paper are as follows:

  (1) We present the non-convex nested Benders decomposition (NC-NBD) method to globally solve general multistage non-convex MINLPs. The method combines piecewise linear relaxations, regularization, binary approximation and cutting-plane techniques in a unique way. In contrast to existing approaches, all approximations are improved dynamically where and when it is reasonable. To our knowledge, this is the first decomposition method for general multistage non-convex MINLPs.

  (2) A crucial requirement for using dynamic refinements is to ensure that all previously determined cuts remain valid within the refinement process and do not have to be generated from scratch. We ensure this by a special cut projection and a careful choice of the MILP relaxations.

  (3) We prove that the proposed NC-NBD method converges to an \(\varepsilon \)-optimal solution of \((\varvec{P})\) in a finite number of steps under some mild assumptions.

  (4) We provide first computational results of applying NC-NBD to moderate-sized instances of a unit commitment problem to illustrate its efficacy.

To enhance readability, we focus our discussions solely on deterministic MINLPs. However, the presented NC-NBD idea can also be applied to stochastic programs with stagewise independent and finite random variables.

The remainder of the paper is organized as follows. We present the considered problem formulation and assumptions in Sect. 2. Then, we introduce the NC-NBD with its different steps in Sect. 3, before presenting convergence results in Sect. 4. Afterwards, we provide computational results for instances of a simple unit commitment problem in Sect. 5. We conclude with Sect. 6.

2 Problem formulation

We consider the following multistage non-convex MINLP problems

$$\begin{aligned} \begin{aligned} (\varvec{P}) \quad v := \min \limits _{x_1, \ldots , x_T, y_1, \ldots , y_T} \quad&\sum _{t=1}^T f_t(x_t,y_t) \\ \text {s.t.} \quad&(x_t, y_t) \in M_t(x_{t-1})&\forall t=1,\ldots ,T. \end{aligned} \end{aligned}$$

Here \(t = 1,\ldots ,T\) denotes the different stages with the final stage \(T \in {\mathbb {N}}\). For each stage t, the decision variables can be separated into mixed-integer state variables \(x_t \in {\mathbb {R}}_+^{n_t^1} \times {\mathbb {Z}}_+^{n_t^2}\) and local variables \(y_t \in {\mathbb {R}}^{n_t^3} \times {\mathbb {Z}}^{n_t^4}\), with \(x_0 = 0\). We define \(n_t := n_t^1 + n_t^2\) as the number of state variables. The sets \(M_t(x_{t-1})\) appearing in the constraints for each stage t are defined by

$$\begin{aligned} M_t(x_{t-1}) := \left\{ (x_t, y_t) \in X_t \times Y_t \ : \ g_{t}(x_{t-1}, x_t, y_t) \le 0, \ h_{t}(x_{t-1}, x_t, y_t) = 0 \right\} . \end{aligned}$$

\(X_t\) and \(Y_t\) denote box constraints; \(X_0 := \{ 0 \}\). As such, \(X_t\) and \(Y_t\) are compact sets for all stage-t variables. All functions \(f_t: X_t \times Y_t \rightarrow {\mathbb {R}}\), \(g_t: X_{t-1} \times X_t \times Y_t \rightarrow {\mathbb {R}}^{m_t^1}\) and \(h_t: X_{t-1} \times X_t \times Y_t \rightarrow {\mathbb {R}}^{m_t^2}\) are well-defined on their domains.

To exploit its multistage structure, we solve \((\varvec{P})\) by some extension of NBD. NBD makes use of the dynamic programming formulation of \((\varvec{P})\), where each stage-t subproblem, \(t=1,\ldots ,T\), can be denoted by

$$\begin{aligned} \begin{aligned} (\varvec{P_t(x_{t-1})}) \quad Q_t(x_{t-1}) := \min _{x_t, y_t, z_t} \quad&f_t(x_t,y_t) + Q_{t+1} (x_{t}) \\ \text {s.t.} \quad&(z_t, x_t, y_t) \in M_t \\&z_t = x_{t-1}, \end{aligned} \end{aligned}$$

with the value function \(Q_t(\cdot )\) of stage t and \(Q_{T+1}(\cdot ) \equiv 0\). Note that \(x_t\) links different stages, i.e., \(x_t\) is a decision variable for \((\varvec{P_t(x_{t-1})})\) and a parameter for \((\varvec{P_{t+1}(x_{t})})\). For the first stage, we obtain that \(Q_1(x_0) = v\) with \(x_0 \equiv 0\). Importantly, subproblem \((\varvec{P_t(x_{t-1})})\) is enhanced by introducing local copies \(z_t\) of the state variables \(x_{t-1}\) and the copy constraints \(z_t = x_{t-1}\). Those copy constraints will prove crucial for the cut generation later on. Taking into account the local copies, we define

$$\begin{aligned} M_t := \left\{ (z_t, x_t, y_t) : z_t \in X_{t-1}, (x_t, y_t) \in M_t(z_t) \right\} . \end{aligned}$$

As the subproblems \((\varvec{P_t(x_{t-1})})\) are non-convex MINLPs, the value functions \(Q_t(\cdot )\) may be discontinuous and non-convex, two detrimental properties for Benders decomposition approaches. To ensure that the value functions \(Q_t(\cdot )\) are at least lower semicontinuous (l.sc.), we make the following technical assumptions:

(A1) For all \(t=1,\ldots ,T\),

  (a) the functions \(f_t\) are Lipschitz continuous on \(X_t \times Y_t\),

  (b) the functions \(g_t\) and \(h_t\) are continuous on \(X_{t-1} \times X_t \times Y_t\).

(A2) (Complete recourse) For any stage t and any \(\bar{x}_{t-1} \in X_{t-1}\), there exists some \((z_t, x_t,y_t) \in X_{t-1} \times X_t \times Y_t\) which is feasible for \((\varvec{P_t(\bar{x}_{t-1})})\).

As all variables are box-constrained, the feasible set \(M_t(x_{t-1})\) of \((\varvec{P_t(x_{t-1})})\) is bounded. With assumption (A1) and the recourse assumption (A2), all subproblems \((\varvec{P_t(x_{t-1})})\) are feasible and bounded. Analogously, \((\varvec{P})\) is feasible with finite optimal value v. Note that under assumption (A2) we can restrict ourselves to generating optimality cuts in NC-NBD, without the need to introduce Benders feasibility cuts.

We obtain our required l.sc. property of the value functions \(Q_t(\cdot )\).

Lemma 2.1

Under assumptions (A1) and (A2) the value functions \(Q_t(\cdot )\) are l.sc. for all \(t=1,\ldots ,T\).

Proof

Fixing all integer variables, the l.sc. follows from Exercise 1.19 in [41]. As \(X_t\) and \(Y_t\) are bounded, only finitely many different values can be attained by the integer variables. The minimum of finitely many l.sc. functions is l.sc. \(\square \)

In the next section, we introduce the NC-NBD method, which combines regularization, piecewise linear approximations, binary expansion and special cutting-plane techniques in a unique way to solve \((\varvec{P})\).

3 Non-convex nested Benders decomposition

3.1 The NC-NBD principle

The basic idea of the NC-NBD algorithm is to exploit two facts: MILP problems can be solved exactly by enhancements of NBD under certain assumptions, and MINLPs can be iteratively outer approximated by MILPs. Thus, the method consists of two main components. The first component is an inner loop which is used to determine an approximately optimal solution of some MILP outer approximation \((\varvec{\widehat{P}^{\ell }})\) of problem \((\varvec{P})\). This approximation is determined by piecewise linear relaxations of the nonlinear functions in \((\varvec{P})\). The second component is an outer loop which refines this outer approximation iteratively (indexed by \(\ell \)) to improve the approximation of the optimal value v of \((\varvec{P})\). The NC-NBD is summarized in Algorithm 1 and illustrated in Fig. 1.

Algorithm 1 (NC-NBD; displayed as a figure in the original)

Fig. 1 Conceptual overview of NC-NBD
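To fix ideas, the interplay of the two loops can be sketched in code. The following is only a schematic rendering of Algorithm 1; all four callables (milp_relaxation, inner_loop, solve_outer_forward, refine_pwl) are illustrative placeholders for the steps described in Sects. 3.2–3.4, not part of the paper's notation.

```python
def nc_nbd(milp_relaxation, inner_loop, solve_outer_forward, refine_pwl,
           eps, eps_hat):
    """Schematic sketch of the NC-NBD two-loop structure (cf. Algorithm 1)."""
    P_hat, cuts = milp_relaxation()       # initial MILP relaxation, empty cut lists
    while True:
        # Inner loop: solve the current MILP relaxation to approximate
        # optimality, returning a lower bound, an approximately optimal
        # point and the updated cut approximations (Sect. 3.3).
        v_hat_low, x_hat, cuts = inner_loop(P_hat, cuts, eps_hat)
        # Outer loop step: MINLP forward pass re-using the cuts, which
        # remain valid for the original problem (Sect. 3.4.1).
        v_up, x = solve_outer_forward(cuts)
        if v_up - v_hat_low <= eps:       # bounds sufficiently close
            return x, v_up                # eps-optimal solution of P
        # Refine the piecewise linear relaxation near x_hat; by construction
        # all previously generated cuts remain valid (Sect. 3.4.2).
        P_hat = refine_pwl(P_hat, x_hat)
```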

The inner loop follows the general principle of NBD to solve \((\varvec{\widehat{P}^{\ell }})\). It consists of a forward and a backward pass through the stages \(t=1,\ldots ,T\) in each iteration i. In the forward pass, the stage-t subproblem \((\varvec{\widehat{P}^{\ell }_t(x_{t-1})})\) is approximated in two different ways: The value function \(\widehat{Q}_{t+1}(\cdot )\) of the following stage is replaced by some outer approximation \(\mathfrak {Q}_{t+1}^{\ell i}(\cdot )\). Moreover, a regularization is added to ensure Lipschitz continuity of the corresponding value functions. Thus, regularized subproblems \((\varvec{\widehat{P}_t^{R,\ell i}(x_{t-1})})\) are solved, as proposed in [52], yielding trial solutions \(\widehat{x}_{t-1}^{\ell i}\) and an upper bound \(\overline{\widehat{v}}^{\ell i}\) for \((\varvec{\widehat{P}^{\ell }})\).

In the backward pass, the approximations \(\mathfrak {Q}_{t+1}^{\ell i}(\cdot )\) of \(\widehat{Q}_{t+1}(\cdot )\) are improved iteratively by constructing additional cuts. As the value functions are possibly non-convex, those cuts are nonlinear. Importantly, cuts for \(\widehat{Q}_{t+1}(\cdot )\) are also valid for \(Q_{t+1}(\cdot )\), as the former is the value function of an outer approximation and thus underestimates the latter.

In the literature, different ways are proposed to obtain nonlinear optimality cuts and to ensure that the inner loop converges to the optimal value \(\widehat{v}^{\ell }\) of \((\varvec{\widehat{P}^{\ell }})\). One method is to generate reverse-norm cuts [1]. However, this only works if the value functions themselves are Lipschitz continuous which is not guaranteed in our setting. Another, more general method is to solve some augmented Lagrangian dual problem, as proposed in [1, 52].

We propose a third and new method, based on the SDDiP technique [55]. We utilize that we can generate sufficiently tight cuts by solving a Lagrangian dual in a lifted space, where all state variables are binary. Thus, we (temporarily) approximate the state variables with binary ones, construct cuts in the binary space and then project those cuts back to the original space. As we show, these projections can be modeled by mixed-integer linear constraints in the original space. By careful construction, these cuts remain valid even if the binary approximation is refined in later iterations.

In this way, we circumvent solving an augmented Lagrangian dual, which may be even more expensive than solving the classical Lagrangian dual, as with the additional nonlinear term in the objective, the primal problems lose their decomposability. In return, we require more (binary) variables and constraints in the Lagrangian duals and for an MILP representation of our cuts than the approach in [1].

In principle, the MILPs as they occur in the inner loop could also be solved by using SDDiP with a static binary approximation of the state variables [55]. As discussed in Sect. 1, this approach has some properties which prevent an efficient integration into our algorithmic framework, though.

As we show in the next section, for a sufficiently fine binary approximation, the obtained cuts in the NC-NBD provide a sufficiently good approximation at the trial solutions \(\widehat{x}_{t-1}^{\ell i}\). Additionally, the cut approximations \(\mathfrak {Q}_t^{\ell }(\cdot )\) are generated in such a way that they are Lipschitz continuous. This is sufficient to ensure convergence to a globally optimal solution of \((\varvec{\widehat{P}^{\ell }})\).

At the end of the backward pass, a lower bound \(\underline{\widehat{v}}^{\ell i}\) is determined. If \(\overline{\widehat{v}}^{\ell i}\) and \(\underline{\widehat{v}}^{\ell i}\) are sufficiently close to each other, an approximate globally minimal point \(\big ( (\widehat{z}_t^{\ell }, \widehat{x}_t^{\ell }, \widehat{y}_t^{\ell } ) \big )_{t=1,\ldots ,T}\) of \((\varvec{\widehat{P}^{\ell }})\) has been identified and the inner loop is left. Otherwise, further cuts have to be constructed or the binary approximation has to be refined. We discuss this decision in more detail in Sect. 3.3.6.

Once the inner loop is left, subproblems \((\varvec{P_t(x_{t-1}, \mathfrak {Q}_{t+1}^{\ell })})\) are solved to determine trial points \(x_{t-1}^{\ell }\) and an upper bound \(\overline{v}^{\ell }\) to v for the original problem \((\varvec{P})\). If this upper bound is sufficiently close to \(\underline{\widehat{v}}^{\ell }\), the solution \(\big ( (z_t^{\ell }, x_{t}^{\ell }, y_t^{\ell }) \big )_{t=1,\ldots ,T}\) is approximately optimal for problem \((\varvec{P})\). If not, the MILP relaxation \((\varvec{\widehat{P}^{\ell +1}})\) is created by refining \((\varvec{\widehat{P}^{\ell }})\) in the neighborhood of \(\big ( (\widehat{z}_t^{\ell }, \widehat{x}_t^{\ell }, \widehat{y}_t^{\ell } ) \big )_{t=1,\ldots ,T}\) and a new inner loop is started.

As for the inner loop, it is crucial that with these refinements in the outer loop all previously generated cuts remain valid. Otherwise, the cut approximation \(\mathfrak {Q}_{t+1}^{\ell }(\cdot )\) would have to be built from scratch, counteracting the idea of a dynamic solution framework. In the following subsections, we show how such persistent validity can be achieved by careful design. Note that, even though we make use of the same regularization idea, our framework with nested loops and dynamic refinements also differs from the method presented in [52].

We explain the different steps of NC-NBD in more detail in the following subsections, before we discuss convergence results in Sect. 4. As long as the index \(\ell \) is not needed for the discussion of the inner loop, we omit it for notational convenience. Moreover, we note that several of the considered subproblems require the introduction of additional decision variables, e.g., for the piecewise linear approximation or the cut projection. For reasons of clarity and comprehensibility, by the terms optimal point or optimal solution we always refer to the projection of the actual optimal points to the space \(X_{t-1} \times X_t \times Y_t\) that we are interested in.

3.2 Piecewise linear relaxations

In the outer loop of NC-NBD, all nonlinear functions \(\gamma \in \varGamma \) in problem \((\varvec{P})\) are approximated by some piecewise linear functions. This is achieved by determining a triangulation of their domain, which in our box-constrained setting is always possible. Then, the piecewise linear functions can be defined on the simplices of this triangulation using the function values of \(\gamma \) at their vertices. For a thorough discussion and state-of-the-art approaches to construct piecewise linear approximations and triangulations, see [18, 39, 40].

The piecewise linear approximations can then be reformulated as mixed-integer linear constraints using auxiliary continuous and binary variables. In the literature, several modeling techniques have been proposed, such as the convex combination model, the incremental model and some logarithmic variants [4, 18, 38, 51]. Later on, we draw on refinement and convergence ideas from [9], which work for several of these models, such as the generalized incremental model [9] or the disaggregated logarithmic convex combination model [51].

By shifting the approximations appropriately, it can be ensured that the obtained MILP \((\varvec{\widehat{P}^{\ell }})\) is indeed a relaxation of the original problem \((\varvec{P})\) [18]. Alternatively, one can construct piecewise linear underestimators and overestimators, yielding tubes for nonlinear equations [25].
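As a toy illustration of the shifting idea in one dimension: the following sketch (our own, not from [18]) interpolates a nonlinear function \(\gamma \) on a uniform grid and shifts the interpolant downwards by a sampling-based, hence only heuristic, bound on the interpolation error, yielding an approximate piecewise linear underestimator.

```python
import numpy as np

def pwl_underestimator(gamma, lo, hi, n_pieces):
    """Toy 1-D sketch: interpolate gamma on [lo, hi] and shift the
    interpolant down so that it (heuristically) underestimates gamma."""
    xs = np.linspace(lo, hi, n_pieces + 1)      # breakpoints (1-D "triangulation")
    ys = gamma(xs)                              # function values at the vertices
    fine = np.linspace(lo, hi, 50 * n_pieces)   # sample grid for the error estimate
    shift = max(np.max(np.interp(fine, xs, ys) - gamma(fine)), 0.0)
    return xs, ys - shift                       # shifted breakpoint values

# usage: an underestimator of x -> x*sin(x) on [0, 5] with 8 pieces
xs, ys = pwl_underestimator(lambda x: x * np.sin(x), 0.0, 5.0, 8)
```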

Applying the piecewise linear approximations to problem \((\varvec{P})\), we obtain the MILP outer approximation with copy constraints

$$\begin{aligned} \begin{aligned} (\varvec{\widehat{P}}) \quad \widehat{v} := \min _{\begin{array}{c} x_1, ..., x_T, y_1, ..., y_T \\ z_1, ..., z_T \end{array}} \quad&\sum _{t=1}^T \widehat{f}_t(x_t,y_t) \\ \text {s.t.} \quad&(z_t, x_t, y_t) \in \widehat{M}_t \quad \quad&\forall t=1,\ldots ,T \\&z_t = x_{t-1}&\forall t=1,\ldots ,T. \end{aligned} \end{aligned}$$

For reasons of clarity, we denote the piecewise linear relaxations of \(f_t(\cdot ), g_t(\cdot )\) and \(h_t(\cdot )\) by \(\widehat{f}_t(\cdot ), \widehat{g}_t(\cdot )\) and \(\widehat{h}_t(\cdot )\), although they are modeled using auxiliary constraints and variables. The set \(\widehat{M}_t\) is defined by replacing the functions \(g_t(\cdot )\) and \(h_t(\cdot )\) in \(M_t\) or \(M_t(x_{t-1})\), respectively, with \(\widehat{g}_t(\cdot )\) and \(\widehat{h}_t(\cdot )\).

The dynamic programming equations for \(t=1, \ldots , T\) are given by

$$\begin{aligned} \begin{aligned} (\varvec{\widehat{P}_t(x_{t-1})}) \quad \widehat{Q}_t(x_{t-1}) := \min _{z_t, x_t, y_t} \quad&\widehat{f}_t(x_t,y_t) + \widehat{Q}_{t+1} (x_{t}) \\ \text {s.t.} \quad&(z_t, x_t, y_t) \in \widehat{M}_t \\&z_t = x_{t-1}. \end{aligned} \end{aligned}$$

For the MILP subproblems \((\varvec{\widehat{P}_t}(\cdot ))\), we obtain the following properties.

Lemma 3.1

Under assumption (A2), subproblem \((\varvec{\widehat{P}_t}(\cdot ))\) has complete recourse and the value function \(\widehat{Q}_t(\cdot )\) is l.sc. for all \(t=1,\ldots ,T\).

The complete recourse follows from the complete recourse of \((\varvec{P_t}(\cdot ))\) by construction. The l.sc. then follows from Theorem 3.1 in [31].

3.3 The inner loop

In the inner loop of NC-NBD, the MILP subproblems \((\varvec{\widehat{P}_t(x_{t-1})})\) are considered. As stated before, we omit the index \(\ell \) for its discussion.

The copy constraints are crucial for all problems solved in the inner loop. In the forward pass, to ensure Lipschitz continuity, we consider regularized subproblems. The regularization is based on relaxing and penalizing the copy constraints. In the backward pass, to generate cuts, a special Lagrangian dual subproblem is solved based on dualizing the copy constraints. This is effective, since combined with a binary expansion of the state variables, the copy constraints yield a local convexification [55].

3.3.1 Regularization

Lipschitz continuity of the value functions is difficult to ensure in the general non-convex case. However, as shown recently in [52], for l.sc. value functions, it is possible to determine some underestimating Lipschitz continuous function by enhancing the original subproblem with an appropriate penalty function \(\psi _t\). In contrast to the more general regularization approach in [52], we require only so-called sharp penalty functions \(\psi _t(x_{t-1}) = \Vert x_{t-1} \Vert \) to regularize the subproblems \((\varvec{\widehat{P}_t(x_{t-1})})\), for some norm \(\Vert \cdot \Vert \).

Definition 3.2

(Regularized subproblem and value function) Let \(\sigma _t > 0\) for \(t=2, \ldots , T\), \(\sigma _1=0\), and define

$$\begin{aligned} \begin{aligned} (\varvec{\widehat{P}^{R}_t(x_{t-1})}) \quad \widehat{Q}^R_t(x_{t-1}) := \min _{z_t, x_t, y_t} \quad&\widehat{f}_t(x_t,y_t) + \sigma _t \Vert x_{t-1} - z_t \Vert + \widehat{Q}^R_{t+1} (x_{t}) \\ \text {s.t.} \quad&(z_t, x_t, y_t) \in \widehat{M}_t. \\ \end{aligned} \end{aligned}$$

\((\varvec{\widehat{P}_t^R})\) is called regularized subproblem and \(\widehat{Q}_t^R(\cdot )\) regularized value function.

By recursion, this approach yields the regularized optimal value \(\widehat{v}^R := \widehat{Q}^R_1(x_{0})\) for the first stage. Lemma 3.1 implies that under assumption (A2), the function \(\widehat{Q}_t(\cdot )\) is l.sc. Then, the regularized value function \(\widehat{Q}_t^R(\cdot )\) has the following properties.

Lemma 3.3

(Proposition 2 in [52]) For all \(t=1,\ldots ,T\) we have:

  (a) \(\widehat{Q}^R_t(x_{t-1}) \le \widehat{Q}_t(x_{t-1})\) for all \(x_{t-1} \in X_{t-1}\);

  (b) under assumptions (A1) and (A2), the regularized value function \(\widehat{Q}^R_t(\cdot )\) is Lipschitz continuous on \(X_{t-1}\).

As also stated in [52], using sharp penalty functions as in Definition 3.2, the penalization is exact for sufficiently large (but finite) \(\sigma _t > 0\). For such \(\sigma _t\), the problems \((\varvec{\widehat{P}})\) and \((\varvec{\widehat{P}^R})\) have the same optimal points and \(\widehat{v}^R = \widehat{v}\). This result goes back to [14], in which augmented Lagrangian problems are analyzed for MILPs. It is shown that using sharp penalty functions and a sufficiently large augmenting parameter, strong duality holds. As this result holds for any value of the dual multipliers, it is also valid for the regularized subproblems.

Lemma 3.4

(Proposition 8 in [14]) Using sharp penalty functions \(\psi _t\), there exist some \(\bar{\sigma }_t > 0\) such that the penalty reformulation in \((\varvec{\widehat{P}^R_t(x_{t-1})})\) is exact for all \(\sigma _t > \bar{\sigma }_t\).

Lemma 3.4 indicates that using the regularized subproblems within our decomposition method NC-NBD, we obtain convergence to \(\widehat{v}\) in the inner loop. To exploit this, we make the following assumption:

(A3) All \(\sigma _t > 0\) are chosen sufficiently large such that Lemma 3.4 is satisfied.

If (A3) is not satisfied, \(\sigma _t\) has to be increased gradually in the course of the NC-NBD method to ensure convergence.

3.3.2 Forward pass

In the forward pass of the inner loop we solve approximations of the regularized subproblems \((\varvec{\widehat{P}_t^R(x_{t-1})})\).

For iteration i, the stage-t forward pass problem is defined as follows

$$\begin{aligned} \begin{aligned}&(\varvec{\widehat{P}_t^{R,i}(\widehat{x}_{t-1}^i, \mathfrak {Q}_{t+1}^{i})}) \\&\underline{\widehat{Q}}^{R,i}_t(\widehat{x}_{t-1}^i, \mathfrak {Q}_{t+1}^{i}) :=&\min _{z_t, x_t, y_t} \quad&\widehat{f}_t(x_t,y_t) + \sigma _t \Vert \widehat{x}_{t-1}^i - z_t \Vert + \mathfrak {Q}^{i}_{t+1} (x_{t}) \\&\text {s.t.} \quad&(z_t, x_t, y_t) \in \widehat{M}_t, \\ \end{aligned} \end{aligned}$$

for the trial state variable \(\widehat{x}_{t-1}^i\), with \(\widehat{x}_0^i \equiv 0\). Function \(\mathfrak {Q}^{i}_{t+1}(\cdot )\), in some sense, approximates the value functions \(\underline{\widehat{Q}}^{R,i}_{t+1}(\cdot , \mathfrak {Q}_{t+2}^{i})\) and \(\underline{\widehat{Q}}^{i}_{t+1}(\cdot , \mathfrak {Q}_{t+2}^{i})\). This approximation is constructed in the backward pass, see Sect. 3.3.4. As those value functions are non-convex, the cut approximation \(\mathfrak {Q}_{t+1}^{i}(\cdot )\) is required to be nonlinear and non-convex. However, as we show later, it can be expressed with mixed-integer linear constraints by lifting the problems to a higher dimension. Therefore, in addition to \(x_t, y_t\) and \(z_t\), the forward pass problem contains further decision variables, which are hidden in \(\mathfrak {Q}_{t+1}^{i}(\cdot )\) and the piecewise linear relaxations \(\widehat{f}_t, \widehat{g}_t\) and \(\widehat{h}_t\).

Note that, expressing \(\mathfrak {Q}_{t+1}^i(\cdot )\) by mixed-integer linear constraints with bounded integer variables, the same reasoning as in Lemma 3.1 can be applied to show that \(\underline{\widehat{Q}}^{i}_t(\widehat{x}_{t-1}^i, \mathfrak {Q}_{t+1}^{i})\) is l.sc. and that, therefore, \(\underline{\widehat{Q}}^{R,i}_t(\widehat{x}_{t-1}^i, \mathfrak {Q}_{t+1}^{i})\) is Lipschitz continuous.

Even with a mixed-integer linear representation of \(\mathfrak {Q}_{t+1}^i(\cdot )\), subproblem \((\varvec{\widehat{P}_t^{R,i}(\widehat{x}_{t-1}^i, \mathfrak {Q}_{t+1}^{i})})\) is a MINLP due to the regularization. For \(\Vert \cdot \Vert _1\) or \(\Vert \cdot \Vert _\infty \), it can be modeled by MILP constraints using standard reformulation techniques for absolute values, though.
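For instance, with \(\Vert \cdot \Vert _1\), introducing an auxiliary vector \(s_t\) (our notation, used only here) in a standard epigraph fashion yields the equivalent MILP

$$\begin{aligned} \min _{z_t, x_t, y_t, s_t} \quad&\widehat{f}_t(x_t,y_t) + \sigma _t \sum _{j} s_{tj} + \mathfrak {Q}^{i}_{t+1} (x_{t}) \\ \text {s.t.} \quad&(z_t, x_t, y_t) \in \widehat{M}_t, \\&s_{tj} \ge \widehat{x}_{t-1,j}^i - z_{tj}, \quad s_{tj} \ge z_{tj} - \widehat{x}_{t-1,j}^i \quad \forall j, \end{aligned}$$

since at any optimal point \(s_{tj} = | \widehat{x}_{t-1,j}^i - z_{tj} |\).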

The optimal point \((\widehat{z}_t^i, \widehat{x}_t^i, \widehat{y}_t^i)\) of each subproblem \((\varvec{\widehat{P}_t^{R,i}(\widehat{x}_{t-1}^i)})\) is stored and \(\widehat{x}_t^i\) is passed to the following stage. Since \(\big ( (\widehat{z}_t^i, \widehat{x}_t^i, \widehat{y}_t^i) \big )_{t=1,\ldots ,T}\) satisfies all constraints of \((\varvec{\widehat{P}^R})\), after all stages have been considered, an upper bound \(\overline{\widehat{v}}\) on the optimal value \(\widehat{v}^R\) of the regularized problem can be determined by

$$\begin{aligned} \overline{\widehat{v}}^{i} = \min \left\{ \overline{\widehat{v}}^{i-1}, \sum _{t=1}^T \Big ( \widehat{f}_t(\widehat{x}_t^{i}, \widehat{y}_t^{i}) + \sigma _t \Vert \widehat{x}_{t-1}^{i} - \widehat{z}_t^{i} \Vert \Big ) \right\} . \end{aligned}$$

With assumption (A3) and Lemma 3.4, this is also an upper bound to \(\widehat{v}\).

3.3.3 Backward pass–Part 1: binary approximation

The aim of the backward pass of an inner loop iteration i is twofold: Firstly, a lower bound \(\underline{\widehat{v}}^i\) on \(\widehat{v}\) is determined. Secondly, cuts for \(Q_{t}(\cdot )\) are derived to improve and update the current approximation \(\mathfrak {Q}_{t}^{i}(\cdot )\).

As mentioned before, we use a dynamically refined binary approximation of the state variables and then apply cutting-plane techniques from the SDDiP algorithm [55]. This approximation is based on static binary expansion [21].

Binary expansion can be applied component-wise to some vector \(x_t\). Some integer component \(x_{tj} \in \left\{ 0, ..., U_j \right\} \) can be exactly and uniquely expressed as

$$\begin{aligned} x_{tj} = \sum _{k=1}^{K_{tj}} 2^{k-1} \lambda _{tkj} \end{aligned}$$

with variables \(\lambda _{tkj} \in \left\{ 0,1 \right\} \) and \({K_{tj} = \lfloor \log _2 U_j \rfloor + 1}\). Some continuous component \(x_{tj} \in [0, U_j]\) can be expressed by discretizing the interval with precision \(\beta _{t j} \in (0,1)\). We then have

$$\begin{aligned} x_{tj} = \sum _{k=1}^{K_{tj}} 2^{k-1} \beta _{t j} \lambda _{tkj} + r_{tj} \end{aligned}$$

with \(K_{tj} = \lfloor \log _2 \left( \frac{U_j}{\beta _{tj}} \right) \rfloor + 1\) and some error \(r_{tj} \in \left[ -\frac{\beta _{t j}}{2}, \frac{\beta _{t j}}{2} \right] \).

For vector \(x_t\), this yields \(K_t = \sum _{j=1}^{n_t} K_{tj}\) number of binary variables. Defining an \((n_t \times K_t)\)-matrix \(B_t\) containing all the coefficients of the binary expansion and collecting all binary variables in one large vector \(\lambda _{t} \in {\mathbb {B}}^{K_t}\), the binary expansion then can be written compactly as \(x_t = B_t \lambda _{t} + r_t\).
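For concreteness, the following sketch (a hypothetical helper; rounding to the nearest representable point is our own choice) computes \(K_{tj}\), one row of \(B_t\) and the binary values for a single continuous component:

```python
import numpy as np

def binary_expansion(x, U, beta):
    """Encode a scalar x in [0, U] as x = sum_k 2^(k-1) * beta * lam_k + r."""
    K = int(np.floor(np.log2(U / beta))) + 1    # number of binary variables K_tj
    coeffs = beta * 2.0 ** np.arange(K)         # one row of the matrix B_t
    n = min(int(round(x / beta)), 2 ** K - 1)   # nearest representable multiple of beta
    lam = np.array([(n >> k) & 1 for k in range(K)])
    anchor = float(coeffs @ lam)                # the component of B_t * lambda_t
    return lam, anchor, x - anchor              # residual r_tj

# usage: x = 3.7 on [0, 10] with beta = 0.25 gives the anchor 3.75, r = -0.05
lam, anchor, r = binary_expansion(3.7, 10.0, 0.25)
```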

Based on this definition, to generate cuts, for each stage t and iteration i, a binary approximation of \(\widehat{x}_{t-1}^i\) is used, i.e., it is replaced by \(B_{t-1} \lambda _{t-1}^i\). Note that the approximation is not necessarily exact for continuous components of \(\widehat{x}_{t-1}^i\). Therefore, the cuts are not necessarily constructed at the trial point \(\widehat{x}_{t-1}^i\) but at the deviating anchor point \(\widehat{x}_{{\mathbb {B}}, t-1}^i := B_{t-1} \lambda _{t-1}^i\).

In the backward pass, we start from the following subproblem, where due to the binary approximation of the state variables, we also adapt the copy constraint to \(\lambda _{t-1}^i = \mathfrak {z}_t\) with variables \(\mathfrak {z}_t \in [0,1]^{K_{t-1}}\).

$$\begin{aligned} \begin{aligned} (\varvec{\widehat{P}_{{\mathbb {B}}t}^{i}(\lambda _{t-1}^i, \mathfrak {Q}_{t+1}^{i+1})}) \quad&\underline{\widehat{Q}}^{i}_{{\mathbb {B}}t}(\lambda _{t-1}^i, \mathfrak {Q}_{t+1}^{i+1}) :=&\min _{\begin{array}{c} x_t, y_t, \\ \mathfrak {z}_t, z_t \end{array}} \quad&\widehat{f}_t(x_t,y_t) + \mathfrak {Q}^{i+1}_{t+1} (x_{t}) \\&\text {s.t.} \quad&(z_t, x_t, y_t) \in \widehat{M}_t \\&z_t = B_{t-1} \mathfrak {z}_t \\&\mathfrak {z}_t \in [0,1]^{K_{t-1}} \\&\mathfrak {z}_t = \lambda _{t-1}^i. \end{aligned} \end{aligned}$$

Remark 3.5

Subproblem \((\varvec{\widehat{P}_{{\mathbb {B}}t}^{i}(\lambda _{t-1}^i, \mathfrak {Q}_{t+1}^{i+1})})\) is equivalent to subproblem \((\varvec{\widehat{P}_{t}^{i}(\widehat{x}_{{\mathbb {B}}, t-1}^i, \mathfrak {Q}_{t+1}^{i+1})})\) because \(z_t = B_{t-1} \mathfrak {z}_t = B_{t-1} \lambda _{t-1}^i = \widehat{x}_{{\mathbb {B}}, t-1}^i\).

Asymptotically, i.e., for an infinitely fine binary approximation, the anchor point converges to the actual trial point.

Lemma 3.6

We have \(\lim _{\beta _{t-1} \rightarrow 0} \widehat{x}_{{\mathbb {B}}, t-1}^i = \widehat{x}_{t-1}^i\).

With Lemma 3.6, asymptotically, the cuts are constructed at \(\widehat{x}_{t-1}^i\). While this is not directly useful in practice, since it requires an infinite number of binary variables, it also implies that for componentwise sufficiently small \(\beta _{t-1} \in (0,1)\), the cuts are constructed very close to \(\widehat{x}_{t-1}^i\). As NC-NBD constructs Lipschitz continuous cuts, this guarantees a sufficiently good approximation of the value function at \(\widehat{x}_{t-1}^i\), as we show in Sect. 4.

Importantly, in our framework the binary approximation is only applied temporarily to derive cuts, while the state variables \(x_{t-1}\) in the forward pass remain continuous. In other words, the anchor points determine where cuts can be constructed, but do not limit where they can be evaluated. This is a crucial difference to applying a static binary expansion, as suggested in the original SDDiP work to solve MILPs with continuous state variables [55].

Moreover, let us emphasize again that applying such static approximation is not appropriate in our inner loop, as the obtained lower bounds are not guaranteed to be valid for \(\widehat{v}\) or v. Similarly, the obtained cuts are not guaranteed to be valid for \(\widehat{Q}_t(\cdot )\) or \(Q_t(\cdot )\), and therefore cannot be re-used within the outer loop. Our proposed inner loop method does not share these issues. We follow a dynamic approach where the binary precision is dynamically refined if required and, as we show later, all cuts remain valid with later refinements.

3.3.4 Backward pass–Part 2: cut generation

As proposed in [55], the copy constraint is dualized to generate cuts. Applied to our context, the following Lagrangian dual subproblem has to be solved

$$\begin{aligned} \begin{aligned} (\varvec{D_{{\mathbb {B}}t}^{i}(\lambda _{t-1}^i, \mathfrak {Q}_{t+1}^{i+1})}) \quad&\max _{\Vert \pi _t \Vert _* \le l_t} \quad \mathcal {L}_{{\mathbb {B}}t}^i(\pi _t, \mathfrak {Q}_{t+1}^{i+1}) + \pi _t^\top \lambda _{t-1}^i, \end{aligned} \end{aligned}$$

where \(\mathcal {L}_{{\mathbb {B}}t}^i(\cdot )\) denotes the Lagrangian function for \(\pi _t\) defined by

$$\begin{aligned} \begin{aligned} \mathcal {L}_{{\mathbb {B}}t}^i(\pi _t^i, \mathfrak {Q}_{t+1}^{i+1}) := \min _{x_t, y_t, \mathfrak {z}_t, z_t} \quad&\widehat{f}_t(x_t,y_t) + \mathfrak {Q}_{t+1}^{i+1}(x_{t}) - \pi _t^\top \mathfrak {z}_{t} \\ \text {s.t.} \quad&(z_t, x_t, y_t) \in \widehat{M}_t \\&z_{t} = B_{t-1} \mathfrak {z}_t \\&\mathfrak {z}_t \in [0,1]^{K_{t-1}} \end{aligned} \end{aligned}$$

and \(\Vert \cdot \Vert _*\) denotes the dual norm to the norm used in the regularized forward pass problems \((\varvec{\widehat{P}_t^{R,i}(\widehat{x}_{t-1}^i, \mathfrak {Q}_{t+1}^{i})})\).
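The bounded dual \((\varvec{D_{{\mathbb {B}}t}^{i}})\) is a concave maximization problem and can be tackled, e.g., by bundle or subgradient methods. A minimal projected-supergradient sketch, assuming an oracle inner_min that solves the Lagrangian subproblem (one MILP solve) for fixed \(\pi _t\) and returns its optimal value and optimal \(\mathfrak {z}_t\), might look as follows; the oracle interface and the diminishing step size are our own choices.

```python
import numpy as np

def solve_lagrangian_dual(inner_min, lam_trial, l_t, iters=200):
    """Maximize  L(pi) + pi' lam_trial  over the box  ||pi||_inf <= l_t."""
    pi = np.zeros_like(lam_trial)
    best_val, best_pi = -np.inf, pi.copy()
    for k in range(1, iters + 1):
        L_val, z_frak = inner_min(pi)          # evaluate the Lagrangian function
        val = L_val + pi @ lam_trial           # dual objective at pi
        if val > best_val:                     # keep the best dual point found
            best_val, best_pi = val, pi.copy()
        g = lam_trial - z_frak                 # supergradient of the dual objective
        pi = np.clip(pi + g / k, -l_t, l_t)    # ascent step, projected onto the box
    return best_pi, best_val
```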

A linear (optimality) cut in binary space \(\{0,1\}^{K_{t-1}}\) is then given by

$$\begin{aligned} \phi _{{\mathbb {B}}t}(\lambda _{t-1}) := \underbrace{\mathcal {L}_{{\mathbb {B}}t}^i(\pi _t^i, \mathfrak {Q}_{t+1}^{i+1})}_{=: c_t^i} + (\pi _t^i)^\top \lambda _{t-1}, \end{aligned}$$
(1)

where \(\pi _t^i\) is an optimal solution of the Lagrangian dual subproblem \((\varvec{D_{{\mathbb {B}}t}^i(\lambda _{t-1}^i,\mathfrak {Q}_{t+1}^{i+1})})\). Those Lagrangian cuts were introduced in [55] and shown to be finite, valid and tight in the SDDiP setting. In our setting, we obtain the following validity result.

Lemma 3.7

Let \(\widehat{Q}_{{\mathbb {B}}t}(\cdot )\) denote the MILP value function of stage t with additional binary approximations. Then,

  (a) for all \(\lambda _{t-1} \in [0,1]^{K_{t-1}}\),

    $$\begin{aligned} \widehat{Q}_{{\mathbb {B}}t}(\lambda _{t-1}) \ge \phi _{{\mathbb {B}}t}(\lambda _{t-1}); \end{aligned}$$

  (b) for all \(x_{t-1}\),

    $$\begin{aligned} \widehat{Q}_{t}(x_{t-1}) \ge \phi _{{\mathbb {B}}t}(\lambda _{t-1}) \end{aligned}$$

    for any \(\lambda _{t-1} \in [0,1]^{K_{t-1}}\) such that \(x_{t-1} = B_{t-1} \lambda _{t-1}\).

Lemma 3.7 (a) follows directly from the validity proof for the SDDiP cuts, which also holds for \(\lambda _{t-1} \in [ 0, 1 ]^{K_{t-1}}\) instead of \(\lambda _{t-1} \in \left\{ 0, 1 \right\} ^{K_{t-1}}\) (see Theorem 3 in [55]). Part (b) then follows using similar arguments as in Remark 3.5. Hence, \(\phi _{{\mathbb {B}}t}\) is, in fact, a valid cut on \([0,1]^{K_{t-1}}\). This enables us to obtain valid under-approximations also for those points which are not exactly representable by the current binary expansion. As it is the value function of an outer approximation, \(\widehat{Q}_t(\cdot )\) underestimates the original MINLP value function \(Q_t(\cdot )\). Thus, the obtained cuts are valid for \(Q_t(\cdot )\) as well.

Contrary to [55], but following [52], we bound the dual variable \(\pi _t\) in the Lagrangian dual subproblem. Therefore, tightness for \(\underline{\widehat{Q}}^i_{{\mathbb {B}}t}(\cdot , \mathfrak {Q}_{t+1}^{i+1})\) is not guaranteed. However, the cuts are at least guaranteed to overestimate the value function \(\underline{\widehat{Q}}^{R,i}_{{\mathbb {B}}t}(\cdot , \mathfrak {Q}_{t+1}^{i+1})\) at \(\lambda _{t-1}^i\). This value function is obtained by regularizing \(\underline{\widehat{Q}}^i_{{\mathbb {B}}t}(\cdot , \mathfrak {Q}_{t+1}^{i+1})\) in the binary space using the same norm as in the forward pass problem. By careful choice of the regularization factor, then, also the regularized value function \(\underline{\widehat{Q}}^{R,i}_{t}(\cdot , \mathfrak {Q}_{t+1}^{i+1})\) in the original space is overestimated at \(x_{{\mathbb {B}}, t-1}^i\). This result is formalized in the following lemma.

Lemma 3.8

Assume that we use \(\Vert \cdot \Vert _1\) for regularization and its dual norm \(\Vert \cdot \Vert _\infty \) for bounding the dual multipliers. Then, as long as \(l_t \ge \sigma _t \Vert B_{t-1} \Vert \), where the latter denotes the induced matrix norm of \(B_{t-1}\), we have

$$\begin{aligned} \phi _{{\mathbb {B}}t} (\lambda _{t-1}^i) \ge \underline{\widehat{Q}}^{R,i}_{{\mathbb {B}}t}(\lambda _{t-1}^i, \mathfrak {Q}_{t+1}^{i+1}) \ge \underline{\widehat{Q}}^{R,i}_{t}(x_{{\mathbb {B}}, t-1}^i, \mathfrak {Q}_{t+1}^{i+1}). \end{aligned}$$

Proof

See Appendix A. \(\square \)

Remark 3.9

The induced matrix norm \(\Vert B_{t-1} \Vert \) depends on the chosen precision of the binary approximation. It can be bounded from above independent of the precision, e.g., \(\Vert B_{t-1} \Vert _1 \le U_{t-1, \max }\) with \(U_{t-1, \max }\) the largest component of the upper bounds in \(X_{t-1}\).

3.3.5 Backward pass–Part 3: cut projection

Solving the forward pass problems \((\varvec{\widehat{P}_t^{R,i}(\widehat{x}_{t-1}^i, \mathfrak {Q}_{t+1}^{i})})\) and the backward pass dual problems \((\varvec{D_{{\mathbb {B}}t}^{i}(\lambda _{t-1}^i, \mathfrak {Q}_{t+1}^{i+1})})\) requires expressing the cut approximation \(\mathfrak {Q}_{t+1}^i(\cdot )\) in the original state variables \(x_t\). Recall that the computed cut \(\phi _{{\mathbb {B}},t+1}(\cdot )\) is a function on \([ 0,1 ]^{K_{t}}\).

According to Lemma 3.7 a), the obtained cuts \(\phi _{{\mathbb {B}}, t+1} (\cdot )\) are not only valid for all binary points, but for all values in \([0,1]^{K_{t}}\). Allowing for \(\lambda _t \in [0,1]^{K_t}\) in the binary approximation, there exist infinitely many combinations of \(\lambda _t\) to exactly describe some point \(x_t \in X_t\), though. Therefore, following from Lemma 3.7 b), one cut in binary space entails infinitely many underestimators of \(Q_{t+1}(\cdot )\) at \(x_{t}\) in the original space \(X_t\). Including infinitely many inequalities in \(\mathfrak {Q}_{t+1}(\cdot )\) is computationally infeasible. Instead, we consider the pointwise maximum of the projection of the cuts to \(X_t\). That way, only the best underestimation for each point \(x_t\) is taken into account. In doing so, we obtain a nonlinear, i.e., piecewise linear, cut in the original state space. For simplicity, in the following, by cut projection we always mean the pointwise maximum of the actual projection.

The projection of some cut \(\phi _{{\mathbb {B}}, t+1}(\cdot )\) to \(X_t\) can be described as the value function

$$\begin{aligned} \begin{aligned} \phi _{t+1} (x_{t}) := \max _{\lambda _t} \left\{ c_{t+1} + (\pi _{t+1})^\top \lambda _t: B_t \lambda _t = x_t, \lambda _t \le e, \lambda _t \ge 0 \right\} \end{aligned} \end{aligned}$$
(2)

of a linear program where e denotes a vector of ones of dimension \(K_t\). The dual problem to (2) yields

$$\begin{aligned} \begin{aligned} \phi ^{D}_{t+1} (x_{t}) := \min _{\eta _t, \mu _t} \left\{ c_{t+1} + x_t^\top \eta _{t} + e^\top \mu _{t} \ : \ B_t^\top \eta _{t} + I \mu _{t} \ge \pi _{t+1}, \mu _{t} \ge 0 \right\} . \end{aligned} \end{aligned}$$
(3)

Note that the dual feasible region does not depend on \(x_t\) and has a finite number of extreme points. Therefore, the cut projection is piecewise linear and concave.
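For illustration, the value \(\phi _{t+1}(x_t)\) of one projected cut could be computed directly by solving the LP (2), e.g., with scipy; this is only a sketch, since within NC-NBD the projection is instead embedded into the subproblems via the KKT system derived below.

```python
import numpy as np
from scipy.optimize import linprog

def projected_cut_value(c_r, pi_r, B_r, x_t):
    """phi_{t+1}(x_t) = max { c_r + pi_r' lam : B_r lam = x_t, 0 <= lam <= e }."""
    res = linprog(-np.asarray(pi_r),            # linprog minimizes, so negate pi_r
                  A_eq=np.atleast_2d(B_r), b_eq=np.atleast_1d(x_t),
                  bounds=[(0.0, 1.0)] * len(pi_r), method="highs")
    if not res.success:                         # x_t not representable via B_r
        raise ValueError("B_r lam = x_t has no solution with lam in [0,1]^K")
    return c_r - res.fun                        # = c_r + pi_r' lam*
```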

As problem (2) is feasible and bounded for any \(x_t \in X_t\), this also holds for the dual problem (3). Therefore, in a dual optimal solution, \(\eta _t\) and \(\mu _t\) are bounded. Note that this bound may change with the binary approximation precision \(\beta _t\), though, and that, if we would generate tight cuts for \(\underline{\widehat{Q}}_{t+1}^i(\cdot , \mathfrak {Q}_{t+2}^{i+1})\), those cuts may become infinitely steep close to discontinuities. However, as we can bound \(\pi _t\) in the Lagrangian dual subproblem independent of \(\beta _t\), see Remark 3.9, and thus construct cuts which at least overestimate the regularized value function \(\underline{\widehat{Q}}^{R,i}_{t+1}(\cdot , \mathfrak {Q}_{t+2}^{i+1})\) at the anchor point \(x_{{\mathbb {B}}, t}^i\), such cases should be ruled out.

We formalize this by assuming the existence of a global bound for \(\eta _t\).

(A4) There exists some \(\rho _t > 0\) such that for all \(t=1,\ldots ,T\), any binary precision \(\beta _t\) and any \(x_t\), the optimal dual variable \(\eta _t\) in problem (3) can be bounded, i.e., \(\Vert \eta _t \Vert \le \rho _t\).

For example, if we obtain cuts which are, in fact, tight for \(\underline{\widehat{Q}}^{R,i}_{t+1}(\cdot , \mathfrak {Q}_{t+2}^{i+1})\) at \(x_{{\mathbb {B}}, t}^i\) and consider only basic solutions in the Lagrangian dual, the gradient of the cuts is bounded by \(\sigma _{t+1}\). With Assumption (A4) it follows that the linear cuts \(\phi _{{\mathbb {B}}, t+1}(\cdot )\) derived in the binary space yield a nonlinear, but Lipschitz continuous projection \(\phi _{t+1}(\cdot )\) in the original space.

To express this projection by mixed-integer linear constraints, we use the KKT conditions to problems (2) and (3). To emphasize that these conditions are considered for the projection of one specific cut r (the index denoting the r-th cut constructed), we index all occurring variables and coefficients by r.

$$\begin{aligned} -\pi _{t+1}^r - \nu _t^r + \mu _t^r + (B_t^r)^\top \eta _t^r&= 0 \end{aligned}$$
(4)
$$\begin{aligned} B_t^r \lambda _t^r - x_t&= 0 \end{aligned}$$
(5)
$$\begin{aligned} \lambda _t^r&\ge 0 \end{aligned}$$
(6)
$$\begin{aligned} \lambda _t^r - e&\le 0 \end{aligned}$$
(7)
$$\begin{aligned} \nu _t^r, \mu _t^r&\ge 0 \end{aligned}$$
(8)
$$\begin{aligned} - (\nu _t^r)^\top \lambda _t^r&= 0 \end{aligned}$$
(9)
$$\begin{aligned} (\mu _t^r)^\top (\lambda _t^r -e)&= 0 . \end{aligned}$$
(10)

The complementary slackness constraints (9) and (10) are nonlinear, but can be expressed linearly componentwise using a Big-\(\mathcal {M}\) formulation (alternatively, SOS-1 constraints may be used):

$$\begin{aligned}&\lambda _{tk}^r \le \mathcal {M}_{1k} \omega _{tk}^r, \quad \nu _{tk}^r \le \mathcal {M}_{2k} (1- \omega _{tk}^r), \quad \omega _{tk}^r \in \left\{ 0, 1 \right\} \end{aligned}$$
(11)
$$\begin{aligned}&\lambda _{tk}^r-1 \ge \mathcal {M}_{3k} u_{tk}^r, \quad \mu _{tk}^r \le \mathcal {M}_{4k} (1- u_{tk}^r), \quad u_{tk}^r \in \left\{ 0, 1 \right\} \end{aligned}$$
(12)

For all components k, we can choose \(\mathcal {M}_{1k} = 1\) and \(\mathcal {M}_{3k} = -1\) due to \(\lambda _{tk} \in [0,1]\). Moreover, using (A4), we are able to obtain explicit choices for \(\mathcal {M}_{2k}\) and \(\mathcal {M}_{4k}\) as well.

Lemma 3.10

Under (A4), there exist explicit, finite bounds for \(\nu _{tk}^r\) and \(\mu _{tk}^r\).

Proof

See Appendix B. \(\square \)

The cut approximation \(\mathfrak {Q}_{t+1}^{i+1}(\cdot )\) is then defined as the maximum of all cuts \(\phi _{{\mathbb {B}}, t+1}^r = c_{t+1}^r + (\pi _{t+1}^r)^\top \lambda _t^r\) where the variable \(\lambda _t^r\) satisfies the linearized KKT conditions (4)–(8) and (11)–(12) for the r-th cut. With Assumption (A4), it is Lipschitz continuous.

Lemma 3.11

The cut approximation \(\mathfrak {Q}_{t+1}(\cdot )\) is Lipschitz continuous in \(X_t\) with Lipschitz constant \(\rho _t\).

The cut projection requires introducing the variables \(\lambda _t^r, \nu _t^r, \mu _t^r, \omega _t^r, u_t^r, \eta _t^r\) and constraints (4)–(8) and (11)–(12) for each cut r. In particular, each cut is associated with a variable \(\lambda _t^r \in [0,1]^{K_t^r}\), where \(K_t^r\) corresponds to the number of binary variables at the time of the cut's generation. This increases the problem size considerably, as the number of variables and constraints to be added per cut is in \(\mathcal {O} \big (n_t \log \big ( \frac{1}{\beta _t} \big ) \big )\). In return, it ensures that cuts do not have to be generated from scratch after each refinement.

3.3.6 Stopping and refining

At the end of the backward pass, a lower bound \(\underline{\widehat{v}}^{i}\) is determined by solving the first-stage subproblem \((\varvec{\widehat{P}_1^{i}(0, \mathfrak {Q}_{2}^{i+1})})\). Here, no Lagrangian dual is solved, since no cuts have to be derived. The lower bound is non-decreasing because the cut approximation is only improved.

If the updated bounds are sufficiently close to each other, i.e., if

$$\begin{aligned} \overline{\widehat{v}}^{i} - \underline{\widehat{v}}^{i} \le \widehat{\varepsilon } \end{aligned}$$

for some predefined tolerance \(\widehat{\varepsilon } > 0\), an approximately optimal point of problem \((\varvec{\widehat{P}})\) has been determined. We show in the following section that this is the case after finitely many iterations i.

If the gap between the bounds does not yet meet the stopping criterion, two cases are possible: In the first case, the algorithm has not yet determined the best possible approximation for the given binary approximation precision. New cuts have been determined in iteration i such that the lower bound \(\underline{\widehat{v}}^i\) has been updated, and the forward solution will change in iteration \(i+1\), as the previous one is cut away.

In the second case, despite the stopping criterion not being met, the forward solution does not change at the beginning of iteration \(i+1\). This case is related to the binary approximation. It can occur if the binary approximation is too coarse and therefore, for all t, the determined cuts at \(\widehat{x}_{{\mathbb {B}}t}^i\) do not improve the approximation at \(\widehat{x}_t^i\). Moreover, it can occur if in subsequent iterations the same cuts are constructed, since \(\widehat{x}_{{\mathbb {B}}, t-1}^i = \widehat{x}_{{\mathbb {B}}, t-1}^{i+1}\). Finally, it can also occur if all possible cuts have been generated: For a fixed binary approximation, there exist only finitely many points \(\widehat{x}_{{\mathbb {B}}t}\). If we restrict the Lagrangian dual subproblem to basic solutions, then only finitely many different cuts can be determined [55].

In the second case, at the beginning of the backward pass of iteration i, the binary approximation is refined. The refinement is computed by increasing \(K_{tj}\) by one for all components j and all stages t, with

$$\begin{aligned} \beta _{tj} = \frac{U_j}{\sum _{k=1}^{K_{tj}} 2^{k - 1}}. \end{aligned}$$

For simplicity, in Algorithm 1 we refine all stages and components equally by one. Note that each refinement requires the introduction of an additional vector \(\lambda _t\), as described in the previous subsection.
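A one-line sketch of this refinement rule for a single component (hypothetical helper; it uses \(\sum _{k=1}^{K} 2^{k-1} = 2^K - 1\)):

```python
def refine_component(K_tj, U_j):
    """Increase the number of binary variables by one and recompute the
    precision beta_tj = U_j / (2^K_tj - 1)."""
    K_tj += 1
    return K_tj, U_j / (2 ** K_tj - 1)
```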

As all previously generated cuts have been projected to the original space \(X_t\), they remain valid and do not have to be recomputed when the binary approximation is refined. This is computationally important.

3.4 The outer loop

3.4.1 The outer loop problem

Once the inner loop is left, we set \(\underline{\widehat{v}}^{\ell } := \underline{\widehat{v}}^{\ell i}, \overline{\widehat{v}}^{\ell } := \overline{\widehat{v}}^{\ell i}\) and \(\mathfrak {Q}_t^{\ell }(\cdot ) := \mathfrak {Q}_t^{\ell i}(\cdot )\) for all \(t=2,...,T\). Note that \(\overline{\widehat{v}}^{\ell }\) is not guaranteed to be a valid upper bound for v because \(\widehat{v}^\ell \le v\). Moreover, we set \(\big (( \widehat{z}_t^{\ell }, \widehat{x}_t^{\ell }, \widehat{y}_t^{\ell } ) \big )_{t = 1,...,T} := \big (( \widehat{z}_t^{\ell i}, \widehat{x}_t^{\ell i}, \widehat{y}_t^{\ell i} ) \big )_{t = 1,...,T}\).

To approximate the optimal value v of \((\varvec{P})\), we solve subproblems

$$\begin{aligned} \begin{aligned} (\varvec{P_t^{\ell }(x_{t-1}^{\ell }, \mathfrak {Q}_{t+1}^{\ell })}) \quad&\underline{Q}^{\ell }_t(x_{t-1}^{\ell }, \mathfrak {Q}_{t+1}^{\ell }) :=&\min _{z_t, x_t, y_t} \quad&f_t(x_t,y_t) + \mathfrak {Q}^{\ell }_{t+1} (x_{t}) \\&\text {s.t.} \quad&(z_t, x_t, y_t) \in M_t \\&z_t = x_{t-1}^{\ell } \end{aligned} \end{aligned}$$

in a forward manner for \(t=1, \ldots , T\) with \(x_0^{\ell } \equiv 0\) and \(x_{t}^\ell := x_t\), where \(x_t\) is an optimal solution of \((\varvec{P_t^{\ell }(x_{t-1}^{\ell }, \mathfrak {Q}_{t+1}^{\ell })})\) for t. Here, we exploit that the cut approximation \(\mathfrak {Q}_t^{\ell }(\cdot )\), constructed in the inner loop, is valid for \(Q_t(\cdot )\) by design as well. By solving these subproblems, we obtain a feasible solution \(\big ( (z_t^{\ell }, x_t^{\ell }, y_t^{\ell }) \big )_{t=1,\ldots ,T}\) for \((\varvec{P})\) and we can determine a valid upper bound for v as \(\overline{v}^{\ell } = \min \left\{ \overline{v}^{\ell -1}, \sum _{t=1}^T f_t(x_t^{\ell }, y_t^{\ell }) \right\} \).

The subproblems \((\varvec{P_t^{\ell }(x_{t-1}^{\ell }, \mathfrak {Q}_{t+1}^{\ell })})\) are non-convex MINLP problems. This means that in order to solve the original non-convex problem \((\varvec{P})\), easier, but still non-convex subproblems have to be solved to optimality for each stage t in each outer loop iteration \(\ell \). This might be a hard challenge by itself. We make the following assumption for the remainder of this article:

(A5).:

An oracle exists that is able to solve subproblems \((\varvec{P_t^{\ell }(x_{t-1}^{\ell }, \mathfrak {Q}_{t+1}^{\ell })})\) to global optimality.

In case that no such global optimization algorithm is available, one can solve appropriate inner approximations of \((\varvec{P_t^{\ell }(x_{t-1}^{\ell }, \mathfrak {Q}_{t+1}^{\ell })})\), which are improved in the course of the algorithm.

If \(\overline{v}^{\ell } - \underline{\widehat{v}}^{\ell } \le \varepsilon \), then NC-NBD terminates and \(\big ( (z_t^{\ell }, x_t^{\ell }, y_t^{\ell }) \big )_{t=1,\ldots ,T}\) is an \(\varepsilon \)-optimal solution for \((\varvec{P})\). Otherwise, the cut approximations \(\mathfrak {Q}_{t+1}^{\ell }(\cdot )\) are not sufficiently good underestimators for the true value functions, even though they give a good approximation of \(\widehat{Q}_t^{\ell }(\cdot )\). This implies that the piecewise linear relaxations have to be improved. Instead of refining them everywhere, they are refined dynamically where it is promising, i.e., in a neighborhood of the approximate optimal solution \(\big ((\widehat{z}_t^{\ell }, \widehat{x}_t^{\ell }, \widehat{y}_t^{\ell }) \big )_{t=1,\ldots ,T}\) of \((\varvec{\widehat{P}^{\ell }})\). In refining the piecewise linear relaxations in its neighborhood, the current solution can be excluded in the next inner loop and the lower bound \(\underline{\widehat{v}}^{\ell }\) improves.

Remark 3.12

Instead of \(\underline{\widehat{v}}^{\ell }\), an even better lower bound for v is given by the optimal value of the first stage subproblem \((\varvec{P_1^{\ell }(x_0^{\ell }, \mathfrak {Q}_2^{\ell })})\).

3.4.2 Refining the piecewise linear relaxations

The refinement consists of two steps: (1) the piecewise linear approximations are refined and (2) the corresponding MILP \((\varvec{\widehat{P}^{\ell }})\) is updated – in such a way that the new MILP \((\varvec{\widehat{P}^{\ell +1}})\) again yields a relaxation of \((\varvec{P})\).

Different strategies are possible to achieve this. For a thorough overview, we refer to [18]. In the following, we make use of a specific adaptive refinement scheme for triangulations from [9] for any nonlinear function \(\gamma _t \in \varGamma _t\). The given piecewise linear approximation at iteration \(\ell \) is defined by a triangulation \(\mathcal {T}\) of \(X_{t-1} \times X_t \times Y_t\) (or a subspace) and the corresponding function values of \(\gamma _t\). Instead of refining this triangulation everywhere now, the main idea is to only refine it in a neighborhood of \(\big ((\widehat{z}_t^{\ell }, \widehat{x}_t^{\ell }, \widehat{y}_t^{\ell }) \big )_{t=1,\ldots ,T}\). Therefore, first, the simplex in \(\mathcal {T}\) containing this point is identified. It is then divided by a longest-edge bisection, yielding a refined triangulation, for which a new MILP model can be set up. As proven in [9], this refinement strategy has some favorable properties with respect to convergence, see Sect. 4.2.

It is important that the obtained relaxation \((\varvec{\widehat{P}^{\ell +1}})\) is tighter than \((\varvec{\widehat{P}^{\ell }})\) so that the corresponding value functions improve monotonically. This is required to ensure that previously generated cuts remain valid in later iterations. For concave functions, this is always satisfied using the presented refinement strategy. For other functions, e.g., convex ones, a more careful determination of the relaxation is required or the MILP models for earlier relaxations have to be kept instead of being replaced. For our theoretical results, it is sufficient that such monotonically improving relaxations can always be determined.

After refining the piecewise linear relaxations, a new iteration \(\ell +1\) is started, beginning with the inner loop.

4 Convergence results

In this section, we prove the convergence of the NC-NBD algorithm. We start proving the convergence of the inner loop to an optimal solution of \((\varvec{\widehat{P}^{\ell }})\) based on some results on the binary refinements. Afterwards, we prove that the outer loop converges to an optimal solution of the original problem \((\varvec{P})\).

4.1 Convergence of the inner loop

As explained in Sect. 3.3.3, within NC-NBD the cuts are not generated at the trial points \(\widehat{x}_{t-1}^i\), but instead at anchor points \(\widehat{x}_{{\mathbb {B}}, t-1}^i := B_{t-1} \lambda _{t-1}^i\). This means that the generated cuts, and with that also the cut approximations \(\mathfrak {Q}_t(\cdot )\), implicitly depend on the binary approximation precision \(\beta _t\).

However, Lemma 3.6 implies that \(\widehat{x}_{t-1}^i\) and \(\widehat{x}_{{\mathbb {B}}, t-1}^i\) should become equal asymptotically in the refinements of the binary approximations. Therefore, asymptotically, the cuts are guaranteed to overestimate \(\underline{\widehat{Q}}^{R,i}_{t}(\widehat{x}_{t-1}^i, \mathfrak {Q}_{t+1}^{i+1})\) and, due to their Lipschitz continuity, for some sufficiently small precision, they are at least \(\varepsilon _{{\mathbb {B}}t}\)-close. This, in turn, leads to convergence of the inner loop, as we formalize and prove below.

Prior to this, let us introduce two useful ideas. Firstly, using the Lipschitz continuity results from Lemma 3.3, page 12 and Lemma 3.11, we can bound the cut approximation error in \(\widehat{x}_{t-1}^i\) as follows:

Lemma 4.1

With Assumption (A4), for any iteration i and stage t it follows

$$\begin{aligned} \begin{aligned} \mathfrak {Q}_t^{i+1}(\widehat{x}_{t-1}^i) - \underline{Q}_t^{R,i}(\widehat{x}_{t-1}^i, \mathfrak {Q}_{t+1}^{i+1}) \ge - (L^R_t + \rho _t) \Vert \widehat{x}_{t-1}^i - \widehat{x}_{{\mathbb {B}}, t-1}^i \Vert . \end{aligned} \end{aligned}$$

Proof

See Appendix C. \(\square \)

Secondly, for any stage t and any fixed binary approximation, if we restrict to basic solutions in the Lagrangian duals, only finitely many different realizations of cut approximations \(\mathfrak {Q}_t(\cdot )\) can be generated. Thus, after a finite number of iterations, the binary approximation is refined. Assuming that the inner loop does not terminate for \(\widehat{\varepsilon } = 0\), we can then observe infinitely many such refinements. Hence, with \(j \rightarrow \infty \), we also get \(\beta _t \rightarrow 0\) for all \(t=1,\ldots ,T\).

Now, we address convergence of the inner loop of NC-NBD to an optimal solution of \((\varvec{\widehat{P}})\). First, we provide a preliminary result, which can be proven by backward induction using Lemmas 3.11 and 4.1.

Lemma 4.2

Suppose that the inner loop does not terminate for \(\widehat{\varepsilon } = 0\). Then, the infinite sequence of forward pass trial solutions \(( \widehat{x}^i )_{i \in {\mathbb {N}}}\) possesses some cluster point \(\widehat{x}^*\) with a corresponding convergent subsequence \(( \widehat{x}^{i_j} )_{j \in {\mathbb {N}}}\). This subsequence satisfies

$$\begin{aligned} \lim _{j \rightarrow \infty } \mathfrak {Q}_t^{i_j}(\widehat{x}_{t-1}^{i_j}) \ge \widehat{Q}_t^R(\widehat{x}_{t-1}^{*}). \end{aligned}$$
(13)

Proof

See Appendix D. \(\square \)

Using this result, convergence can be proven.

Theorem 4.3

Suppose that the inner loop does not terminate for \(\widehat{\varepsilon } = 0\). Then, the sequence \(( \underline{\widehat{v}}^i )_{i \in {\mathbb {N}}}\) of lower bounds determined by the algorithm converges to \(\widehat{v}\) and every cluster point of the sequence of feasible forward pass solutions generated by the inner loop is an optimal solution of \((\varvec{\widehat{P}})\).

Note that with a similar argument it can be shown that the inner loop terminates as soon as \(\mathfrak {Q}_t^i(\widehat{x}_{t-1}^i) \ge \widehat{Q}_t^R(\widehat{x}_{t-1}^{i})\) for all \(t=2,...,T\).

Considering that the inner loop is integrated into an outer loop improving the MILP approximations of \((\varvec{P})\), infinite convergence is not directly useful. Moreover, infinitely many binary refinements are not computationally feasible. However, we can deduce that an approximately optimal solution of \((\varvec{\widehat{P}})\) is determined in a finite number of iterations.

Corollary 4.4

For any stopping tolerance \(\widehat{\varepsilon } > 0\), the inner loop stops in a finite number of iterations with an \(\widehat{\varepsilon }\)-optimal solution of \((\varvec{\widehat{P}})\).

4.2 Convergence of the outer loop

We start our convergence analysis of the outer loop with a feasibility result for the solutions determined in the inner loop, which follows from the convergence results in [9]. The main idea is that, as the domain is bounded for all functions \(\gamma \in \varGamma \), using a longest-edge bisection, after a finite number of steps, all considered simplices become sufficiently small (since in the worst case all simplices have been refined sufficiently often).

Lemma 4.5

([9]) Using longest-edge bisection for the piecewise linear relaxation refinements within NC-NBD yields optimal solutions \(\big ((\widehat{z}_t^{\ell }, \widehat{x}_t^{\ell }, \widehat{y}_t^{\ell }) \big )_{t=1,\ldots ,T}\) for \((\varvec{\widehat{P}^{\ell }})\) in the inner loop, which

  1. (a)

    are approximately feasible for \((\varvec{P})\) after a finite number of steps \(\ell \),

  2. (b)

    become feasible for \((\varvec{P})\) asymptotically in the number of refinements \(\ell \).

Next we show that with decreasing the feasibility error also the deviation in the optimal value between \((\varvec{\widehat{P}^{\ell }})\) and \((\varvec{P})\) can be controlled.

As a preliminary result, we obtain that for sufficiently small feasibility tolerances \(\widehat{\varepsilon }_{\gamma }\) for all \(\gamma \in \varGamma \), there exists a neighborhood of the optimal solution \(\widehat{\mathrm {x}}^{\ell } := \big ((\widehat{z}_t^\ell , \widehat{x}_t^\ell , \widehat{y}_t^\ell ) \big )_{t=1,\ldots ,T}\) of problem \((\varvec{\widehat{P}^{\ell }})\) containing a feasible point \(\widetilde{\mathrm {x}}^{\ell } := \big ((\widetilde{z}_t^\ell , \widetilde{x}_t^\ell , \widetilde{y}_t^\ell ) \big )_{t=1,\ldots ,T}\) of \((\varvec{P})\). This follows primarily from the continuity of all functions in \((\varvec{P})\).

Lemma 4.6

For any \(\delta > 0\), there exists some \(\hat{\ell } \in {\mathbb {N}}\) such that for all \(\ell \ge \hat{\ell }\) there exists some feasible point \(\widetilde{\mathrm {x}}^{\ell }\) of \((\varvec{P})\) with

\(\Vert \widetilde{\mathrm {x}}^{\ell } - \widehat{\mathrm {x}}^{\ell } \Vert _2 \le \delta \).

Applying Lemma 4.6 yields the following result with respect to the deviation in the optimal value between \((\varvec{\widehat{P}^{\ell }})\) and \((\varvec{P})\).

Theorem 4.7

There exists some such that for all we have

$$\begin{aligned} 0 \le v - \widehat{v}^{\ell } \le \varepsilon . \end{aligned}$$

Proof

The proof makes use of the Lipschitz continuity of \(f_t\), Lemma 4.5 and Lemma 4.6 to bound \(v - \widehat{v}^\ell \) from above by \(L_f \delta + \sum _{t=1}^T \widehat{\varepsilon }_{f_t}\) (with \(\widehat{\varepsilon }_{f_t}\) deduced from \(\widehat{\varepsilon }_\gamma \) with \(\gamma = f_t\)). The assertion then follows with \(\varepsilon := L_{f} \delta + \sum _{t=1}^T \widehat{\varepsilon }_{f_t}\). For a detailed proof see Appendix F. \(\square \)

We obtain the central convergence result for NC-NBD:

Theorem 4.8

NC-NBD has the following convergence properties:

  1. (a)

    Assume that for all \(\ell \) the MILP \((\varvec{\widehat{P}^\ell })\) is solved to global optimality in a finite number of steps. Then, if NC-NBD does not terminate with \(\varepsilon = 0\), the sequence of lower bounds \((\underline{\widehat{v}}^\ell )_{\ell \in {\mathbb {N}}}\) converges to v and the outer loop solutions converge to an optimal solution of \((\varvec{P})\).

  2. (b)

    Let \(\varepsilon = \widehat{\varepsilon } > 0\). Then, if NC-NBD does not terminate, it converges to an \(\widehat{\varepsilon }\)-optimal solution of \((\varvec{P})\).

  3. (c)

    For any \(\varepsilon> \widehat{\varepsilon } > 0\), NC-NBD terminates with an \(\varepsilon \)-optimal solution of \((\varvec{P})\) after a finite number of steps.

Proof

See Appendix G. \(\square \)

5 Computational results

We illustrate the adequacy of using the NC-NBD to solve multistage non-convex MINLPs by applying it to moderate-sized instances of a unit commitment problem. NC-NBD is implemented in Julia-1.5.3 [7] based on the SDDP.jl package [12], which provides an existing implementation for SDDP. More implementation details are presented in Appendix H.

The considered unit commitment problem is formally described in detail in Appendix I. Importantly, the considered problem contains binary state variables, but also continuous state variables, such that a binary approximation of the state variables is required in the backward pass of NC-NBD. Additionally, all instances contain a nonlinear function in the objective. In the base instances, we consider a concave quadratic emission cost curve in the objective. In the valve-point instances, additionally, we consider a non-convex fuel cost curve with a sinusoidal term. In both cases, we analyze instances with 2 to 36 stages and 3 to 10 generators, resulting in 6 to 20 state variables. More details on our parameter settings and the complete test results for all instances are presented in Appendix I.

The results show that NC-NBD succeeds to solve multistage non-convex MINLPs with a moderate number of stages and state variables to (approximate) global optimality. It converges to the globally minimal point for each of the instances and, considering our 1% tolerance, terminates with valid upper and lower bounds for v.

For the base instances, we observe long computation times of several minutes compared to state-of-the-art solvers for MINLPs, which solve the problems in a few seconds, though. We address some of the reasons and possible solutions for this behavior at the end of this section. As the results for problems with a small number of state variables, but many stages look most promising, for our valve-point instance tests we focus on such instances.

For these instances, the sinusoidal terms in the objective exclude many existing general purpose solvers from application. A sample of the obtained results is presented in Table 1, for the complete ones, see Appendix I. The results show that NC-NBD is less efficient than existing solvers for problems with few stages, but becomes competitive with a larger number of stages. Especially for the instances with 36 stages, conventional solvers have difficulties closing the optimality gap while NC-NBD manages to solve the instances in reasonable time.

Table 1 Solution times in sec. for valve-point instances with three different demand time series

These results confirm that NC-NBD should be best suited for multistage problems with a large number of stages, but a relatively small number of state variables, as the obtained subproblems remain sufficiently small even for a larger number of iterations, while general purpose solvers may start to struggle due to the combination of many stages and nonlinear terms. Therefore, NC-NBD may also be useful for stochastic programs where the deterministic equivalent becomes computationally infeasible for monolith approaches. To investigate this is left for future research.

While some of the test results look promising, we still see substantial potential for improvement. This should also help to make NC-NBD more efficient and competitive for problems with a larger number of state variables. It is a known drawback of SDDiP [55], which is inherited by NC-NBD, that existing methods to solve the Lagrangian dual problems may take extremely long to converge. To some extent, this could possibly be mitigated by additionally using different cut types such as strengthened Benders cuts [55], thus, only constructing tight cuts every few iterations. Yet, developing more efficient solution methods is an important open research question.

Additionally, with each projected cut, the considered subproblems become considerably larger. While we implemented a simple cut selection scheme to reduce the subproblem size, more sophisticated approaches may be required to keep the subproblems tractable for applications with many state variables.

Finally, so far, we assume that the outer loop MINLPs are solved to global optimality (A5) directly. In a more efficient implementation of NC-NBD, these subproblems should be approximated as well.

6 Conclusion

We propose the non-convex nested Benders decomposition (NC-NBD) method to solve multistage non-convex MINLPs. The method is based on combining piecewise linear relaxations, regularization, binary approximation and cutting-plane techniques in a unique and dynamic way. We are able to prove that NC-NBD is guaranteed to compute an \(\varepsilon \)-optimal solution for the original MINLP in a finite number of steps. Computational results for some moderate-sized instances of a unit commitment problem demonstrate its applicability to multistage problems.

We require all constraints to be continuous and the objective function to be Lipschitz continuous, which are common assumptions in nonlinear optimization. We also assume complete recourse for the multistage problem. Moreover, the regularization factors are assumed to be sufficiently large to ensure exact penalization in the regularized subproblems. If this is not the case, the factors can be increased gradually within NC-NBD.

In contrast to previous approaches to solve multistage non-convex problems, we do not require the value functions to be monotonic in the state variables [36] and allow the state variables to enter not only the objective function, but also the constraints. The latter avoids the assumption of oracles to handle indicator functions [52].

In NC-NBD, we combine dynamic binary approximation of the state variables, cutting-plane techniques tailor-made for binary state variables and a projection from the binary to the original space. This way, we are able to obtain non-convex, piecewise linear cuts to approximate the non-convex value functions of multistage MILPs. Using some additional regularization, this is even possible if those value functions are not (Lipschitz) continuous. Together with piecewise linear relaxations, this yields non-convex underestimators for the non-convex value functions of MINLPs. All approximations are refined dynamically and, by careful design, it is ensured that all cuts remain valid even with such refinements.

The proposed method can be enhanced to solve stochastic MINLPs as well. In particular, a sampling-based approach like in SDDP could be used. In such case some adaptions have to be made with respect to the refinement criteria (forward solutions may remain unchanged for several iterations until the right scenarios are sampled) or the convergence checks, though.

While the presented version of NC-NBD already uses approximations which are dynamically refined, different strategies may be even more dynamic and efficient in practice. For instance, the piecewise linear relaxations could be refined dynamically in the inner loop as well.

The main drawback of NC-NBD is that the considered subproblems can become severely large, since for binary approximation, for piecewise linear approximations and for cut projection, a high number of additional variables and constraints may have to be introduced. This can become problematic, especially, if a very high binary expansion precision is required to approximate the value functions sufficiently good in the forward solutions. Recent results show that the number of binary variables K required grows linearly with the dimension \(n_t\) of state variables and logarithmically with the inverse of the binary precision \(\beta _t\) [55].

Therefore, in its current form, NC-NBD is best applicable to multistage MINLPs which are too large to solve in their extensive form, but for which each subproblem is sufficiently small and contains only a few nonlinear functions.