
A column generation algorithm for solving energy system planning problems

Abstract

Energy system optimization models are typically large models that combine sub-models ranging from linear to highly nonlinear. Column generation (CG) is a classical tool for generating feasible solutions of sub-models, which define columns of a global master problem used to steer the search for a global solution. In this paper, we present a new inner approximation method for solving energy system MINLP models. The approach combines CG and the Frank-Wolfe algorithm to generate an inner approximation of a convex relaxation, together with a primal heuristic for computing solution candidates. The features of this approach are: (i) no global branch-and-bound tree is used, (ii) sub-problems can be solved in parallel to generate columns, which neither have to be optimal nor have to become available simultaneously, (iii) an arbitrary solver can be used to solve sub-models, (iv) the approach (and the implementation) is generic and can be used to solve other nonconvex MINLP models. We perform experiments with decentralized energy supply system models with more than 3000 variables. The numerical results show that the new decomposition method computes high-quality solutions and has the potential to outperform state-of-the-art MINLP solvers.

Introduction

Energy system optimization

Energy systems involve the conversion, distribution, and storage of energy. They represent complex structures of interconnected subsystems, including different technologies, actors, and markets, depending on the problem under consideration and the defined system boundary (Möst et al. 2009). Energy system studies range from technology comparisons for domestic applications (Ashouri et al. 2013) to the design of long-term transnational transformation strategies (Brown et al. 2018). When mathematical optimization methods are applied to the analysis, it is the modeler’s responsibility to select an appropriate level of detail for the investigated problem to ensure the computability of the model and the significance of the obtained results (Brooks and Tobias 1996). Hence, the modeler has to address the trade-off between model accuracy and the required solution effort. In optimization-based process synthesis, Chen and Grossmann (2017) distinguish between aggregated models, shortcut models, and rigorous models.

Aggregated models typically represent the underlying physical relationships in a simplified way. The problems under investigation are often embedded in large energy systems, and usually, the plant operation is considered in more detail. The resulting models are typically linear (LP) or mixed-integer linear (MILP), and commercial and non-commercial software is available to solve them effectively, e.g., Gurobi (Gurobi Optimization 2020), CPLEX (CPLEX IBM 2020), CBC (Forrest et al. 2020), or SCIP (Gamrath et al. 2020). Examples include design concepts for central plants to supply district heating networks (Bruche and Tsatsaronis 2019), industrial parks with electricity, heating, and cooling demands (Voll et al. 2013), and decentralized urban areas with consideration of heat network expansion (Rieder et al. 2014). Rigorous models, in contrast, feature a significantly higher level of detail of the modeled technical devices and their fundamental principles. They represent the plant behavior at the component level and take into account, for example, chemical equilibria or heat transfer phenomena. The models are typically developed using special simulation software, and for optimization, derivative-free black-box methods such as evolutionary algorithms are frequently employed. These often time-consuming techniques enable the investigation of complex models but have the drawback that they provide no proof of solution quality and may only find local optima. Examples are the design optimization for supercritical steam power plants (Wang et al. 2014) and natural gas-fired oxyfuel plants (Teichgraeber et al. 2017).

Shortcut models are a compromise between the two classes above. They can either be large and thus closer to the aggregated models, but take into account, for instance, more advanced (nonlinear) functions for describing the component performance or costs, e.g., (Elsido et al. 2017; Goderbauer et al. 2016). Alternatively, the models can be closer to rigorous models, with reduced complexity so that deterministic solution methods can handle them. Model simplification can be achieved by considering only a few characteristic load points to limit the number of variables, and by introducing model equations with lower mathematical complexity, e.g., to describe the thermodynamic properties (Ahadi-Oskui et al. 2010; Jüdes et al. 2009). In both cases, the results are commonly nonlinear mathematical models (NLP). Most of them are also nonconvex and have additional integer variables to state, for example, whether a component is present or absent in a design. In this case, the problem class is mixed-integer nonlinear (MINLP). There are several deterministic methods available for solving nonconvex MINLPs. They can be subdivided into (i) branch-and-bound (Burer and Letchford 2012; Sahinidis 2020), (ii) MIP approximation (Goderbauer et al. 2016), and (iii) decomposition methods (Nowak et al. 2019; Muts et al. 2020b). Most of the available commercial and open-source solvers, e.g., BARON (Sahinidis 2020) or SCIP (Gamrath et al. 2020), apply the branch-and-bound method. However, finding (high-quality) solutions for large-scale MINLP problems with branch-and-bound-based solvers is often difficult with limited time and memory resources, as the size of the branch-and-bound tree to be managed may grow with the problem dimension.

Hence, a second approach to dealing with the challenging nonlinearities in optimization models is to introduce piecewise-linear MIP approximations of the nonlinear functions. The MIP approximations are usually applied manually during model preprocessing. The variety of applications includes combined cooling, heat, and power plants (Bischi et al. 2014), energy systems with heat demands at different temperature levels (Yokoyama et al. 2002), and electrochemical conversion technologies (Gabrielli et al. 2018). However, a MIP approximation solution is not an exact solution of the nonlinear model and might even be infeasible in the original problem. Goderbauer et al. addressed this challenge with a problem-tailored adaptive algorithm for the synthesis of decentralized energy supply systems (DESS) (Goderbauer et al. 2016). Their iterative two-step process combines the identification of approximate solutions with discretized variables and the solving of NLPs with selected variables fixed. Between two iterations, the discretization grid is refined and shifted in the direction of the previously calculated solution. With this approach, Goderbauer et al. generated high-quality MINLP solutions in short computation time and outperformed the commercial solver BARON. However, one should take into account that, unlike BARON, the method is not a general-purpose MINLP solver; it has been tuned specifically to DESS problems.

A third deterministic method for solving MINLPs is decomposition. The authors have gained very good experience with problem decomposition via Column Generation (CG) in solving huge network problems (Borndörfer et al. 2013; Nowak 2014). Motivated by CG’s excellent performance, we present a new MINLP CG method. It is based on combining CG and the Frank-Wolfe method (FW) for generating an inner approximation (IA) of a convex relaxation, and a primal heuristic for computing solution candidates. The characteristics of this approach are:

  (i) it does not require a global branch-and-bound tree,

  (ii) sub-problems can be solved in parallel to generate columns, which neither have to be optimal nor have to become available simultaneously,

  (iii) an arbitrary solver can be used to solve sub-models.

The new method can be applied to general nonconvex or convex sparse real-world MINLPs, e.g., instances of the MINLPLib (Vigerske 2018). These models can be reformulated as block-separable problems, defined by low-dimensional sub-problems coupled by a moderate number of global linear constraints. In this work, we focus on the characteristics of the superstructure-based synthesis of decentralized energy supply systems presented in Goderbauer et al. (2016).

Outline

First, Sect. 2 describes a practical application from the area of energy supply system modeling. Section 3 describes the decomposition view of the problem and sketches the resource-constrained approach. Section 4 presents various algorithms based on IA to solve the problem. The numerical evaluation in Sect. 5 shows the potential of the new decomposition-based approximation approach. We conclude in Sect. 6, outlining steps for future investigation.

Modeling of a practical application

Decentralized energy supply systems

A DESS is a complex, integrated system consisting of several energy conversion units and energy supply and demand forms. Usually, the starting point for DESS optimization is a superstructure containing all possible components and their interconnections. The optimization goal is to identify a plant design and a suitable plant schedule simultaneously that minimize or maximize an objective function value. The objective function is usually of an economic nature (e.g., maximizing the net present value of an investment). Binary variables (taking the values zero or one) model the facility selection (existence/non-existence) and the operation states of selected units (on/off). Continuous non-negative variables represent the component input and output streams and the component costs. Both the input-output relationships and the component costs can be expressed by nonlinear functions. In this case, the underlying model belongs to the class of mixed-integer nonlinear problems (MINLP).

General MINLP model formulation

The DESS model used in this paper was initially presented in Voll et al. (2013) and is described in detail in Goderbauer et al. (2016). For the following brief model description, the notation is adopted from Goderbauer et al. (2016). The superstructure of the model is sketched in Fig. 1.

Fig. 1

Superstructure of the investigated DESS with one unit of each component type

Every superstructure S is assembled from components s, namely boilers (B), combined heat and power engines (C), electrically powered compression chillers (referred to as “turbo chillers”, T), and heat-utilizing absorption chillers (A). In the base case, each component can be installed only once in a superstructure (see Sect. 2.3 for more details). The DESS is required to meet the heating (\({\dot{E}}_{\ell}^{\mathrm {heat}}\)), cooling (\({\dot{E}}_{\ell}^{\mathrm {cool}}\)), and electricity demands (\({\dot{E}}_{\ell}^{\mathrm {el}}\)) for every load case \(\ell\) from the set of load cases L. Gas-fired boilers and combined heat and power engines are available to provide thermal energy (gas price: \(p^{\mathrm {gas,buy}}\)). The electricity produced in the engines is used to satisfy the electricity demand; missing or excess electricity is balanced via the connected electricity grid. Purchased electricity incurs costs of \(p^{\mathrm {el,buy}}\), and sold electricity is reimbursed at \(p^{\mathrm {el,sell}}\). Electrically powered turbo chillers and heat-utilizing absorption chillers are available to meet the cooling demand. The maximization of the net present value is selected as the objective function. The annual costs are discounted to the investment time using the interest rate i and the cash flow time \(\gamma ^{\mathrm {CF}}\). The factor \(m_{s}\) accounts for the annual maintenance costs as a percentage of the investment costs \(I_{s}\).

The following variable values are determined during the optimization process:

  • existence/non-existence \(y_{s}\) of all units in the superstructure; \(y_{s} \in \{0,1\}\)

  • operation status \(\delta _{s{\ell}}\) (on/off) of all units in all load cases; \(\delta _{s{\ell}} \in \{0,1\}\)

  • input \({\dot{U}}_{s{\ell}}\) and output energy flow \({\dot{V}}_{s{\ell}}\) of all units in all load cases (main output of engines: heat; electricity output \({\dot{V}}_{s}^{\mathrm {el}}\) is derived); \({\dot{U}}_{s{\ell}}, {\dot{V}}_{s{\ell}} \ge 0\)

  • nominal size \({\dot{V}}_{s}^{\mathrm {N}}\) of all units (engines: related to heat output); investment costs \(I_{s}\) are derived from the nominal size; \({\dot{V}}_{s}^{\mathrm {N}} \ge 0\)

  • electricity import \({\dot{U}}_{\ell}^{\mathrm {el,buy}}\) and export \({\dot{V}}_{\ell}^{\mathrm {el,sell}}\) from and to the grid in all load cases; \({\dot{U}}_{\ell}^{\mathrm {el,buy}}, {\dot{V}}_{\ell}^{\mathrm {el,sell}} \ge 0\)

The objective function of the optimization problem is presented in Eq. (1). The net present value to be maximized is calculated from the discounted annual cash flows minus the overall investment costs. The annual cash flow is derived from the revenues for electricity sales minus the expenses for electricity import, purchased fuel, and maintenance costs.

$$\begin{aligned} \begin{aligned} \max&\quad \dfrac{(i+1)^{\gamma ^{\mathrm {CF}}}-1}{i\cdot (i+1)^{\gamma ^{\mathrm {CF}}}} \cdot \Bigg [ \sum _{l \in L}{\Delta }_{l} \cdot \bigg ( p^{\mathrm {el,sell}} \cdot {\dot{V}}_{l}^{\mathrm {el,sell}} - p^{\mathrm {el,buy}} \cdot {\dot{U}}_{l}^{\mathrm {el,buy}}\\&- p^{\mathrm {gas,buy}} \cdot \sum _{s \in B\cup {}C} \delta _{sl} \cdot {\dot{U}}_{s}({\dot{V}}_{sl},{\dot{V}}_{s}^{\mathrm {N}}) \bigg ) - \sum _{s \in S} m_{s} \cdot I_{s}({\dot{V}}_{s}^{\mathrm {N}})\cdot y_{s} \Bigg ]\\&- \sum _{s \in S} I_{s}({\dot{V}}_{s}^{\mathrm {N}})\cdot y_{s} \end{aligned} \end{aligned}$$
(1)
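The leading factor in (1) is the standard present-value annuity factor, which equals the sum of the yearly discount factors \(\sum _{t=1}^{\gamma ^{\mathrm {CF}}}(1+i)^{-t}\). A minimal numeric check of this identity, with illustrative parameter values not taken from the paper:

```python
# Present-value annuity factor from Eq. (1): discounts gamma_CF equal annual
# cash flows to the investment date at interest rate i.
def annuity_factor(i: float, gamma_cf: int) -> float:
    return ((1.0 + i) ** gamma_cf - 1.0) / (i * (1.0 + i) ** gamma_cf)

# Equivalent form: the sum of the yearly discount factors 1/(1+i)^t.
def discount_sum(i: float, gamma_cf: int) -> float:
    return sum((1.0 + i) ** (-t) for t in range(1, gamma_cf + 1))

# Illustrative values: 8% interest, 10-year cash flow time.
assert abs(annuity_factor(0.08, 10) - discount_sum(0.08, 10)) < 1e-12
```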

Equations (2)–(4) represent the balance constraints for the three energy demands: heating, cooling, and electricity. They need to be met with equality in all load cases \(\ell\).

$$\begin{aligned} {\dot{E}}_{\ell}^{\mathrm {heat}}&= \sum _{s \in B\cup {}C} {\dot{V}}_{s{\ell}} - \sum _{s \in A} \delta _{s{\ell}} \cdot {\dot{U}}_{s}({\dot{V}}_{s{\ell}},{\dot{V}}_{s}^{\mathrm {N}}), \qquad&\forall \quad \ell \in L \end{aligned}$$
(2)
$$\begin{aligned} {\dot{E}}_{\ell}^{\mathrm {cool}}&= \sum _{s \in A\cup {}T} {\dot{V}}_{s{\ell}}, \qquad&\forall \quad \ell \in L\end{aligned}$$
(3)
$$\begin{aligned} {\dot{E}}_{\ell}^{\mathrm {el}}&= {\dot{U}}_{\ell}^{\mathrm {el,buy}} + \sum _{s \in C} \delta _{s{\ell}} \cdot {\dot{V}}_{s}^{\mathrm {el}}({\dot{V}}_{s{\ell}},{\dot{V}}_{s}^{\mathrm {N}})\nonumber \\&\quad - \sum _{s \in T} \delta _{s{\ell}} \cdot {\dot{U}}_{s}({\dot{V}}_{s{\ell}},{\dot{V}}_{s}^{\mathrm {N}}) - {\dot{V}}_{\ell}^{\mathrm {el,sell}}, \qquad \forall \quad \ell \in L \end{aligned}$$
(4)

All components are available in continuous nominal sizes. Equation (5) bounds their size to the range between a lower and upper limit.

$$\begin{aligned} {\dot{V}}_{s}^{\mathrm {N,min}} \le {\dot{V}}_{s}^{\mathrm {N}} \le {\dot{V}}_{s}^{\mathrm {N,max}},&\qquad \forall \quad s \in S \end{aligned}$$
(5)

Equations (6)–(8) set the output energy flow to zero if the binary operation variable is zero; otherwise, the output flow is limited to values between the minimum part-load (\(\alpha _{s}^{\mathrm {min}} \cdot {\dot{V}}_{s}^{\mathrm {N}}\)) and the nominal size. Equation (9) ensures that a unit of a superstructure can only be operated if it exists.

$$\begin{aligned} {\dot{V}}_{s{\ell}}&\le \delta _{s{\ell}} \cdot {\dot{V}}_{s}^{\mathrm {N,max}} ,&\qquad \forall \quad s \in S,\ \ \ell \in L\end{aligned}$$
(6)
$$\begin{aligned} {\dot{V}}_{s{\ell}}&\le {\dot{V}}_{s}^{\mathrm {N}},&\qquad \forall \quad s \in S,\ \ \ell \in L\end{aligned}$$
(7)
$$\begin{aligned} {\dot{V}}_{s{\ell}}&\ge \alpha _{s}^{\mathrm {min}} \cdot {\dot{V}}_{s}^{\mathrm {N}} - (1 - \delta _{s{\ell}}) \cdot \alpha _{s}^{\mathrm {min}} \cdot {\dot{V}}_{s}^{\mathrm {N,max}},&\qquad \forall \quad s \in S,\ \ \ell \in L\end{aligned}$$
(8)
$$\begin{aligned} y_{s}&\ge \delta _{s{\ell}},&\qquad \forall \quad s \in S,\ \ \ell \in L \end{aligned}$$
(9)
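Constraints (6)–(8) form a standard big-M on/off formulation. A minimal feasibility check of this logic for one unit and one load case, with illustrative parameter values:

```python
# Feasibility check of the on/off constraints (6)-(8) for one unit s and one
# load case l: the output V is forced to 0 when delta = 0, and into the window
# [alpha_min * V_N, V_N] when delta = 1.  All values are illustrative.
def feasible(V, delta, V_N, V_N_max, alpha_min):
    c6 = V <= delta * V_N_max                                      # Eq. (6)
    c7 = V <= V_N                                                  # Eq. (7)
    c8 = V >= alpha_min * V_N - (1 - delta) * alpha_min * V_N_max  # Eq. (8)
    return c6 and c7 and c8 and V >= 0

V_N, V_N_max, alpha_min = 60.0, 100.0, 0.2

assert feasible(0.0, 0, V_N, V_N_max, alpha_min)       # off -> zero output
assert not feasible(10.0, 0, V_N, V_N_max, alpha_min)  # off -> no output allowed
assert feasible(12.0, 1, V_N, V_N_max, alpha_min)      # on, at minimum part-load
assert not feasible(5.0, 1, V_N, V_N_max, alpha_min)   # on, below minimum part-load
assert not feasible(70.0, 1, V_N, V_N_max, alpha_min)  # on, above nominal size
```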

All parameter values, the nonlinear part-load performance curves \({\dot{U}}_{s}({\dot{V}}_{s{\ell}},{\dot{V}}_{s}^{\mathrm {N}})\), and investment cost functions \(I_{s}({\dot{V}}_{s}^{\mathrm {N}})\) are specified in the Appendix. As stated by Goderbauer et al. (2016) the MINLP model is nonconvex due to the investment cost functions and the nonlinear equality constraints (2) and (4).

DESSLib model instances

The DESS model studied here is published as the library DESSLib (Bahl et al. 2016), which includes multiple model instances of varying complexity. The instances differ in three properties:

  • number of similar units of all component types (S4, S8, S12, S16)

  • number of load cases (L1, L2, L4, L6, L8, L12, L16, L24)

  • energy demands values (ten sampled variants)

For example, instance S8L4-0 contains at most two units of each of the four component types (boiler, engine, turbo-chiller, absorption chiller). Both units can have different nominal sizes and can be operated independently. Furthermore, instance S8L4-0 includes four load cases (L4) and uses the data variation with index zero. DESSLib is used in Goderbauer et al. (2016) to evaluate their problem-tailored method AdaptDiscAlgo (see the explanation in Sect. 1) and to benchmark it against state-of-the-art solvers. The results are also available for download at Bahl et al. (2016). Thus, high-quality, near-optimal solutions are available for all model instances.

MINLP block model formulation

Our model has been developed with the Python-based algebraic modeling language Pyomo. Pyomo provides the component Block, which serves as a container for variables, constraints, and more (Hart et al. 2017). For all energy conversion units of the DESS model, i.e., for each boiler, engine, turbo-chiller, and absorption chiller, individual Pyomo blocks are created. These main blocks contain the component variables for existence, nominal size, and investment costs, and the associated nonlinear investment cost functions. Additionally, one or multiple sub-blocks are created within the main component block to store the variables and constraints with load case dependency. These include the operation status variables, the input and output flow variables, the nonlinear performance constraints, and Eqs. (6)–(9). Each sub-block can contain one or more load cases. Thus, the size of the sub-blocks is adjustable and depends on the decision of the model user. A schematic representation of the modeling concept is shown in Fig. 2. Modeling with Pyomo blocks is necessary to support the automatic model decomposition. Further details are provided in Sect. 3.1.

Fig. 2

Implemented modeling concept with the application of Pyomo Block components. In this example, each load case \(\ell\) has an individual sub-block in the unit s

MINLP formulation and approximation

Block-separable formulation

In order to apply the new decomposition method, the DESS model is formulated as a block-separable (or quasi-separable) MINLP problem, where the vector of variables \(x \in {\mathbb {R}}^n\) is partitioned into |K| blocks of dimension \(n_k\) such that \(n=\sum \limits _{k \in K}n_k\) and \(x_k\in {\mathbb {R}}^{n_k}\) denotes the variables of block k. The model is described by

$$\begin{aligned} \min \, c^Tx {{\,\mathrm{\quad s.t. \,\, }\,}}x\in P,\,\, x_k\in X_k,\,\, k\in K \end{aligned}$$
(10)

with global (coupling) linear constraints

$$\begin{aligned} P:= \{x \in {\mathbb {R}}^{n}: \, \, a_i^T x\le b_i,\, i\in M_1, \,\, a_i^T x= b_i,\, i\in M_2 \}, \end{aligned}$$
(11)

and local (block) constraints

$$\begin{aligned} X_k:=G_k\cap Y_k,\quad Y_k:={\mathbb {R}}^{n_{k1}}\times {\mathbb {Z}}^{n_{k2}}, \end{aligned}$$
(12)

with \(n_k=n_{k1}+n_{k2}\) and

$$\begin{aligned} G_k:= \{ y \in [\underline{x}_k, \overline{x}_k] \subset {\mathbb {R}}^{n_{k}}: g_{kj}(y)\le 0,\, j\in J_k \}, \end{aligned}$$
(13)

and \(|M_1|+|M_2|=m\). The vectors \(\underline{x}, \overline{x} \in {\mathbb {R}}^{n}\) denote lower and upper bounds on the variables.

The local (block) set \(X_k\) is defined by linear and nonlinear local constraint functions, \(g_{kj}: {\mathbb {R}}^{n_k}\rightarrow {\mathbb {R}}\), which are assumed to be bounded and continuously differentiable within the set \([\underline{x}_k, \overline{x}_k]\). An example of a nonlinear constraint function is depicted in Fig. 3. The linear global constraints in P are defined by \(a_i^Tx:=\sum \limits _{k\in K}a_{ki}^Tx_k\), \(a_{ki} \in {\mathbb {R}}^{n_k}, b_i \in {\mathbb {R}}, i\in [m]\). The linear objective function is defined by \(c^Tx:=\sum \limits _{k\in K}c_k^Tx_k\), \(c_k\in {\mathbb {R}}^{n_k}\).

Fig. 3

Example of the nonlinear investment cost function of the absorption chillers (see Eq. (A.7) in the Appendix)

Blocks in the set K can be detected automatically or defined manually. One approach to automatic block detection is based on the connected components of the so-called ‘sparsity graph’, defined by the nonzero entries of the Hessians of the constraint functions of the original MINLP (Nowak 2005). In this paper, we consider MINLP problems (DESS models) with manually defined blocks; see Sect. 2.4 for implementation details. Some of these blocks are defined by continuous variables and linear local constraints. We merge these blocks into one single linear block, which is represented by the following polytope

$$\begin{aligned} X_1:=\{ y \in [\underline{x}_1,\overline{x}_1]\subset {\mathbb {R}}^{n_1} : g_{1j}(y):=a_{1j}^Ty-b_{1j}\le 0,\, j\in J_1\}. \end{aligned}$$
(14)

Further details on how we handle the linear block (14) are given in Sect. 4.1.1.

Resource-constrained reformulation

In order to define an inner approximation, the MINLP (10) is reformulated as a resource-constrained optimization problem by an affine transformation of the original variables \(x_k, k\in K,\) into resource variables (Muts et al. 2020a)

$$\begin{aligned} w_k:=A_kx_k,\quad A_{ki}= {\left\{ \begin{array}{ll} c_k^T &{}: i=0, \\ a_{ki}^T &{}: i \in [m]. \end{array}\right. } \end{aligned}$$
(15)

The variables \(w_k, k\in K,\) describe how the objective value and the constraint values \(a_i^Tx\) are distributed over blocks. Considering the transformed local feasible set

$$\begin{aligned} W_k:= \{ A_{k}x_k\, : \, x_k\in X_k\}\subset {\mathbb {R}}^{m+1}, \end{aligned}$$
(16)

the resource-constrained formulation of (10) is defined as

$$\begin{aligned} \min \,F(w) {{\,\mathrm{\quad s.t. \,\, }\,}}w\in H,\quad \,\,w_k\in W_k,\,\, k\in K, \end{aligned}$$
(17)

with the transformed objective function

$$\begin{aligned} F(w):=\sum _{k\in K} w_{k0}, \end{aligned}$$

and the transformed global feasible set

$$\begin{aligned} H:= \left\{ w\in {\mathbb {R}}^{(m+1)|K|} : \, \, \sum _{k\in K} w_{ki} \le b_i,\, i\in M_1,\,\, \, \, \sum _{k\in K} w_{ki} = b_i,\, i\in M_2 \right\} . \end{aligned}$$
(18)

Notice that the resource-constrained view considers a space with \((m+1)|K|\) variables instead of the original n variables. For sparse MINLPs with sparse matrices \(A_k\), the number of resource variables can be greatly reduced, since a resource variable \(w_{ki}\) is only needed if \(A_{ki}\ne 0\). For instance, let \(m=1499\), \(|K|=50\), and let each matrix \(A_k, k \in K,\) have 100 nonzero rows. Then the overall number of resource variables is 75,000, which reduces to 5000 if only the nonzero rows of \(A_k\) are considered.
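The counting argument and the transformation (15) can be checked on a tiny example. The dimensions mirror the numbers in the text; the concrete block data are made up for illustration:

```python
import numpy as np

# Counting example from the text: (m+1)|K| resource variables in the dense
# view versus nonzero rows only in the sparse view.
m_global, n_blocks, nonzero_rows = 1499, 50, 100
assert (m_global + 1) * n_blocks == 75000
assert nonzero_rows * n_blocks == 5000

# Tiny concrete instance of (15): m = 2 global rows, block dimension n_k = 3.
# Row 0 of A_k carries the objective c_k, rows 1..m the global constraint rows.
c_k = np.array([1.0, 0.0, 2.0])
a_k = np.array([[1.0, 1.0, 0.0],
                [0.0, 1.0, 1.0]])
A_k = np.vstack([c_k, a_k])   # (m + 1) x n_k
x_k = np.array([1.0, 2.0, 3.0])
w_k = A_k @ x_k               # resources of block k: objective and row values
assert w_k.tolist() == [7.0, 3.0, 5.0]
```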

Convex hull relaxation and inner approximation

A resource-constrained convex hull relaxation of (17) is defined by

$$\begin{aligned} \min \,F(w) {{\,\mathrm{\quad s.t. \,\, }\,}}w\in H,\quad \,\,w_k\in {{\,\mathrm{conv}\,}}(W_k),\,\,k\in K. \end{aligned}$$
(19)

The quality of relaxation (19) depends strongly on the duality gap, defined by

$$\begin{aligned} \text{ gap }:={{\,\mathrm{val}\,}}(10) - {{\,\mathrm{val}\,}}(19). \end{aligned}$$
(20)

Let \(S_k:=\{y_{kj}\}_{j\in [S_k]}\subset X_k\) be a finite set of locally feasible points, called an inner approximation of \(X_k\), and define the corresponding resources, called columns, by \(R_k:=\{A_ky: y\in S_k\}\). An inner approximation of (19), called LP-IA, is defined by

$$\begin{aligned} \min \,F(w){{\,\mathrm{\quad s.t. \,\, }\,}}w\in H,\quad \,\,w_k\in {{\,\mathrm{conv}\,}}(R_k),\,\,k\in K. \end{aligned}$$
(21)

We use the notation \([R]:=\{1,\dots ,|R|\}\) for the index set of the finite set R, i.e. \(R=\{r_j : j\in [R]\}\). An LP formulation of (21) is given by

$$\begin{aligned} \min \,F(w(z)) {{\,\mathrm{\quad s.t. \,\, }\,}}w(z)\in H,\quad z_k\in \varDelta _{|R_k|},\,\,k\in K, \end{aligned}$$
(22)

where

$$\begin{aligned} w(z):=w_k(z_k)_{k\in K},\quad w_k(z_k):=\sum _{j\in [R_k]} z_{kj} r_{kj}, \ \ \ \ z_k \in \varDelta _{|R_k|}. \end{aligned}$$
(23)

We call \(z_{kj}\) the weight of column \(r_{kj} \in R_k, k \in K,\) and denote by \(\varDelta _{|R_k|}\subset {\mathbb {R}}^{|R_k|}\) the standard simplex with \(|R_k|\) vertices:

$$\begin{aligned} \varDelta _{|R_k|} = \{z \in {\mathbb {R}}^{|R_k|}: \sum \limits _{j \in [R_k]} z_{j} = 1, z_{j} \ge 0, j \in [R_k]\}. \end{aligned}$$
(24)
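For a toy instance, the LP formulation (22) can be solved directly with an off-the-shelf LP solver. A minimal sketch with two nonlinear blocks, two columns each, and one global equality resource constraint; the column data are made up purely for illustration, and SciPy is assumed to be available:

```python
# Toy LP-IA master problem (22): variables are the column weights z_kj, one
# simplex constraint per block plus the global resource constraint in H.
from scipy.optimize import linprog

# columns r_kj = (cost, resource): block 1 -> (0,0), (2,4); block 2 -> (1,1), (3,6)
cost = [0.0, 2.0, 1.0, 3.0]  # objective coefficients r_kj[0]
A_eq = [
    [0.0, 4.0, 1.0, 6.0],    # global resource row: sum of r_kj[1] * z_kj = b
    [1.0, 1.0, 0.0, 0.0],    # simplex: weights of block 1 sum to 1
    [0.0, 0.0, 1.0, 1.0],    # simplex: weights of block 2 sum to 1
]
b_eq = [5.0, 1.0, 1.0]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq)  # z >= 0 is the default bound
assert res.status == 0
assert abs(res.fun - 2.6) < 1e-6
# res.x holds the weights z_kj; the duals of the resource row provide the
# search direction d = (1, mu) used for column generation below.
```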

A MINLP column generation algorithm

In this section we describe a new MINLP CG method. The method, called minlpcg, solves MINLP sub-problems to generate partial solutions \(x_k\in X_k\) and \(w_k\in W_k\), which are finally combined into a global solution that also fulfills the global constraints. The search directions for better partial solutions are given by master problems in the resource space. The basic steps of minlpcg are:

  1. Initialize column sets \(R_k, k\in K\).

  2. Solve (21) to compute an approximate solution \({\tilde{w}}\) of (19).

  3. Project \({\tilde{w}}\) onto the feasible set to find a solution candidate \(w^*\).

  4. Add new columns to \(R_k, k\in K\).

  5. Repeat from step 2 until a stopping criterion is met.
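The steps above can be sketched as a generic loop. The problem-specific pieces are injected as callables; every name here is illustrative and not the authors' API:

```python
# Generic skeleton of the minlpcg loop (steps 1-5).
def minlpcg(init_columns, solve_lp_ia, project, add_columns,
            max_iter=50, tol=1e-6):
    R = init_columns()                   # step 1: initial column sets R_k
    best = None
    for _ in range(max_iter):
        w_tilde, duals = solve_lp_ia(R)  # step 2: approximate solution of (19)
        w_star = project(w_tilde)        # step 3: primal solution candidate
        if best is None or w_star < best:
            best = w_star
        delta = add_columns(R, duals)    # step 4: new columns from sub-problems
        if delta <= tol:                 # step 5: stop if no improving column
            break
    return best

# Smoke test with stub callables (numbers instead of resource vectors):
# each round appends a slightly better column, so the incumbent improves.
state = {"d": 1.0}
def _init(): return [[5.0]]
def _solve(R): return min(R[0]), None
def _proj(w): return w
def _add(R, duals):
    state["d"] *= 0.5
    R[0].append(R[0][-1] - state["d"])
    return state["d"]

best = minlpcg(_init, _solve, _proj, _add)
assert best < 5.0
```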

Since the sub-models of (10) represent difficult MINLP problems, it is not easy to quickly generate many columns. Therefore, we developed several strategies for accelerating the column generation (CG).

Traditional generation of columns

Dealing with linear block (sub-problem), adapting inner approximation

In the DESS models, the dimension \(n_1\) of the linear block (14) is much higher than the dimensions \(n_k\) of the other blocks. Preliminary numerical experiments showed that treating the linear block separately, instead of running traditional CG over all blocks, reduces the running time by an order of magnitude, from about 2 days to 1-2 hours. Mathematically, this can be explained by the many vertices of the polytope of the linear block, all of which would have to be generated as columns if the block were treated like the nonlinear blocks. Therefore, we use the following modified LP-IA master problem (25), in which the linear constraints of \(X_1\) in (14) are directly integrated into the LP-IA:

$$\begin{aligned} \begin{aligned} \min&\,F(w(z,x_1)) \\ {{\,\mathrm{\quad s.t. \,\, }\,}}&\sum _{k\in K} w_{ki}(z_k,x_1) \le b_i \ , \, i\in M_1,\\&\sum _{k\in K} w_{ki}(z_k,x_1) = b_i \ , \, i\in M_2,\\&z_k\in \varDelta _{|R_k|} ,\,\,k\in K\setminus \{1\},\quad x_1\in X_1, \end{aligned} \end{aligned}$$
(25)

where \(w_k(z_k,x_1)\) is defined similarly as in (23) by

$$\begin{aligned} w_k(z_k,x_1):=\left\{ \begin{array}{cl} w_k(z_k) &{} : k\in K\setminus \{1\}\\ A_1x_1 &{} : k=1. \end{array} \right. \end{aligned}$$
(26)

So we distinguish \(K\setminus \{1\}\) as the set of nonlinear blocks, whereas \(x_1\in {\mathbb {R}}^{n_1}\) are the variables of the linear block, and \(X_1\) is defined only by local linear constraints as in (14).

Lemma 1

Let \(R_1=\text {vert}(W_1)\) be the set of extreme points of \(W_1\). Then problems (25) and (21) are equivalent.

Proof

By definition (14), \(X_1\) defines a polytope. Hence, \(W_1\) is a polytope defined by a linear transformation of \(X_1\), and \(W_1={{\,\mathrm{conv}\,}}(R_1)\). Therefore, \(A_1x_1\) in (26) can be replaced by \(w_1(z_1)\), which proves the statement. \(\square \)

The procedure solveinnerlp(R) solves (25). It returns a primal solution \((x_k)_{k\in K}\), where \(x_1\) is the solution of the linear block and \(x_k=x_k(z_k) \) with

$$\begin{aligned} x_k(z_k):=\sum _{j\in [S_k]} z_{kj} y_{kj}, \quad z_k \in \varDelta _{|S_k|}, \quad k\in K\setminus \{1\}, \end{aligned}$$
(27)

and \(S_k\) is the set of generated feasible points of \(X_k\) related to \(R_k\), i.e. \(r_{kj}=A_ky_{kj}\). Moreover, the procedure returns the dual values \(\mu \) for the global resource constraints.

Generation of columns solving sub-problems

The MINLP CG-algorithm generates columns by solving the following MINLP sub-problems

$$\begin{aligned} y_k \in {{\,\mathrm{argmin}\,}}\{\, d^TA_kx_k \, : \, x_k\in X_k \,\} \end{aligned}$$
(28)

regarding a search direction \(d\in {\mathbb {R}}^{m+1}\), where d is typically defined by a dual solution \(\mu \in {\mathbb {R}}^m\) of the LP-IA (21), i.e. \(d=(1,\mu ^T)\). Notice that the result \(y_k\) corresponds to an extreme point of \(X_k\) as well as \(W_k\) and is a so-called supported Pareto point in the resource space, see Muts et al. (2020b).

The procedure solveMinlpSubProbl(d) solves (28) and is used in procedure addCol(\(d,R_k\)), described in Algorithm 1, to add a column \(A_ky_k\) to \(R_k\). Moreover, the procedure computes the reduced cost

$$\begin{aligned} \delta _k:=\max \{0,\min \{ d^Tr_k - d^TA_ky_k : r_k \in R_k \}\}, \end{aligned}$$
(29)

which is used later to measure the impact of the procedure. If \(\delta _k > 0\) for some \(k\in K\), then column \(A_ky_k\) may improve the objective value of (25). Otherwise, if \(\delta _k=0\) for all \(k\in K\), the objective value of (25) cannot be changed (Nowak 2005) and the column generation algorithm can be stopped.

Algorithm 1
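The reduced cost (29) can be evaluated directly from the column data. A minimal sketch with made-up numbers; with the identity transformation, columns coincide with the generated points:

```python
import numpy as np

# Reduced cost (29) of a freshly generated point y_k against the existing
# columns R_k for a given search direction d.  All data are illustrative.
def reduced_cost(d, R_k, A_k, y_k):
    new_col_val = d @ (A_k @ y_k)
    return max(0.0, min(d @ r - new_col_val for r in R_k))

d = np.array([1.0, 0.5])
A_k = np.eye(2)                                   # identity: columns = points
R_k = [np.array([2.0, 2.0]), np.array([3.0, 0.0])]  # existing columns, d-value 3
y_k = np.array([1.0, 1.0])                        # new point with d-value 1.5

# delta_k = max(0, 3.0 - 1.5) = 1.5 > 0: the new column can improve the LP-IA.
assert reduced_cost(d, R_k, A_k, y_k) == 1.5
```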

Algorithm 2 generates columns by alternating between solving an LP-IA master problem and calling Algorithm 1. The set \({\hat{K}} \subseteq K\) is a subset of the block set K. Note that the columns can be generated in parallel.

Algorithm 2

Initializing columns

In order to initialize the column sets \(R_k\), we perform a subgradient method (Algorithm 3) for maximizing the dual of (10) with respect to the global constraints:

$$\begin{aligned} L(\mu ) := \sum _{k \in K}\left( \min _{y_k\in X_k}(1, \mu ^T) A_k y_k\right) - \mu ^Tb. \end{aligned}$$
(30)

We compute the step length \(\alpha ^{p}\) by comparing the values of the function \(L(\mu )\) defined in (30) at consecutive iterations p of Algorithm 3, similarly to Shor (1985):

$$\begin{aligned} \alpha ^{p + 1} = {\left\{ \begin{array}{ll} 0.5 \alpha ^{p}&{}: L(\mu ^p) \le L(\mu ^{p - 1}), \\ 2 \alpha ^{p}&{}: \text {otherwise}. \end{array}\right. } \end{aligned}$$
(31)

Note that procedure addcol in Algorithm 3 can be performed in parallel.

Algorithm 3
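The step-length rule (31) is a one-line update: halve \(\alpha\) when the dual value did not increase, double it otherwise. A minimal sketch (function name illustrative):

```python
# Step-length update (31) for the subgradient method of Algorithm 3.
def update_step_length(alpha: float, L_curr: float, L_prev: float) -> float:
    # Halve on no dual progress (L(mu^p) <= L(mu^{p-1})), double otherwise.
    return 0.5 * alpha if L_curr <= L_prev else 2.0 * alpha

assert update_step_length(1.0, 3.0, 3.5) == 0.5  # no dual progress -> shrink
assert update_step_length(1.0, 4.0, 3.5) == 2.0  # dual improved -> expand
```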

Accelerating column generation

In order to accelerate CG without calling an MINLP routine, we developed two methods. The first method generates columns by performing NLP local search from good starting points obtained from the LP-IA. The second method is a Frank-Wolfe algorithm for quickly solving the convex hull relaxation (19).

Fast generation of columns using NLP local search and rounding

The master problem used to provide starting points for the column generation is the LP-IA problem (21). Since only few columns are available in the beginning, the LP-IA master problem (21) is often infeasible, i.e. \(H\cap \prod _{k\in K} {{\,\mathrm{conv}\,}}(R_k)=\emptyset \). Therefore, we use an LP-IA master problem (32) that includes slacks and a penalty term \(\varPsi (\theta ,s)\), similar to du Merle et al. (1999):

$$\begin{aligned} \begin{aligned} \min&\ F(w(z,x_1)) + \varPsi (\theta ,s)\\ {{\,\mathrm{\quad s.t. \,\, }\,}}&\sum _{k\in K} w_{ki}(z_k,x_1) \le b_i+ s_i \ , \, i\in M_1,\\&\sum _{k\in K} w_{ki}(z_k,x_1) = b_i+ s_{i1}-s_{i2} \ , \, i\in M_2,\\&z_k\in \varDelta _{|R_k|} ,\,\,k\in K\setminus \{1\},\quad x_1\in X_1, \\&s_i\ge 0,\quad i \in [m], \end{aligned} \end{aligned}$$
(32)

where the penalty term for slack variables is

$$\begin{aligned} \varPsi (\theta ,s):=\sum _{i\in M_1} \theta _i s_i + \sum _{i\in M_2} \theta _i (s_{i1}+s_{i2}), \end{aligned}$$
(33)

and penalty weights \(\theta > 0\) are sufficiently large. We define \(w_k(z_k,x_1)\) similarly to (23) as follows

$$\begin{aligned} w_k(z_k,x_1):=\left\{ \begin{array}{cl} w_k(z_k) &{} : k\in K\setminus \{1\},\\ A_1x_1 &{} : k=1, \end{array} \right. \end{aligned}$$

and \(x_1\in {\mathbb {R}}^{n_1}\) are the variables of the linear block, defined in (14). The procedure solveslackinnerlp(R) solves (32) and returns the solution point x, the dual solution \(\mu \) and the slack values s. If the slack variables are nonzero, i.e. \(s\not =0\), the method slackdirections computes a new search direction \(d \in {\mathbb {R}}^m\) in order to eliminate them

$$\begin{aligned} d := \sum \limits _{\begin{array}{c} s_{i}> 0.1 \max (s), \\ i \in M_1 \end{array}} e_i + \sum \limits _{\begin{array}{c} s_{i1}> 0.1 \max (s), \\ i \in M_2 \end{array}} e_i-\sum \limits _{\begin{array}{c} s_{i2}> 0.1 \max (s), \\ i \in M_2 \end{array}} e_i, \end{aligned}$$

with \(e_i \in {\mathbb {R}}^m\) the coordinate i unit vector.
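A sketch of this direction computation (the slack data layout is our assumption, not the actual Decogo interface):

```python
import numpy as np

def slack_directions(m, s_ineq, s_eq_pos, s_eq_neg):
    """Build d per the displayed formula: +e_i for large inequality slacks
    and large positive equality slacks, -e_i for large negative ones.
    s_ineq maps i in M1 to s_i; s_eq_pos/s_eq_neg map i in M2 to s_i1/s_i2."""
    smax = max([*s_ineq.values(), *s_eq_pos.values(), *s_eq_neg.values()])
    d = np.zeros(m)
    for i, v in s_ineq.items():
        if v > 0.1 * smax:
            d[i] += 1.0
    for i, v in s_eq_pos.items():
        if v > 0.1 * smax:
            d[i] += 1.0
    for i, v in s_eq_neg.items():
        if v > 0.1 * smax:
            d[i] -= 1.0
    return d
```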

figure d

Since, for the CG algorithm, it is sufficient to compute high-quality locally feasible solutions, we present a local search procedure approxsolveminlpsubprobl in Algorithm 4 based on rounding a locally feasible point. The goal of this procedure is to avoid calling an MINLP solver for the sub-problems and thereby reduce the sub-problem solution time. The inputs of the local search procedure approxsolveminlpsubprobl are the block solution \(x_k\) as starting point and the direction d or \((1,\mu )\) as search direction. It starts by running procedure solvenlpsubproblem, which computes a local minimizer of the integer relaxed sub-problem

$$\begin{aligned} \begin{aligned} {\tilde{y}}_k:= {{\,\mathrm{argmin}\,}}&\ d^TA_kx \\ {{\,\mathrm{\quad s.t. \,\, }\,}}&x \in G_k \end{aligned} \end{aligned}$$
(34)

starting from the primal solution \(x_k\) of the LP-IA. Then the procedure round rounds the integer variables of block k in \({\tilde{y}}_k\) to obtain \({\hat{x}}_k\). Finally, procedure solvefixednlpsubproblem solves an NLP problem again, fixing the rounded integer variables of \({\hat{x}}_k\):

$$\begin{aligned} {\tilde{x}}_k:={{\,\mathrm{argmin}\,}}\,\,&c_k^Tx_k {{\,\mathrm{\quad s.t. \,\, }\,}}x_k\in G_k, \quad x_{ki}={\hat{x}}_{ki},\quad i\in [n_{k2}], \end{aligned}$$
(35)

and using the continuous variable values of \({\tilde{y}}_k\) as starting point. The complete column generation procedure is depicted in Algorithm 5. Note that procedure approxsolveminlpsubprobl in Algorithm 5 can be performed in parallel.
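The three-step structure of approxsolveminlpsubprobl can be sketched as follows; the solver callbacks are placeholders, not the actual Decogo routines:

```python
def approx_solve_minlp_subproblem(x_start, direction,
                                  solve_nlp, round_integers, solve_fixed_nlp):
    """Sketch of Algorithm 4: local search by rounding.
    1. (34): solve the integer-relaxed NLP sub-problem in the given direction,
    2. round the integer variables of the relaxed solution,
    3. (35): re-solve the NLP with the rounded integer variables fixed."""
    y_tilde = solve_nlp(x_start, direction)
    x_hat = round_integers(y_tilde)
    return solve_fixed_nlp(x_hat, start=y_tilde)
```

The point is only the composition: no MINLP solver is invoked, only two NLP solves and a rounding step.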

figure e

CG by solving the convex hull relaxation using a Frank-Wolfe algorithm

In this section, we present a Frank-Wolfe algorithm which is an alternative way to generate columns. It is based on solving convex hull relaxation (19) by a quadratic penalty function approach:

$$\begin{aligned} Q(w,\sigma ):=F(w)+ \sum _{i\in [m]} \sigma _i \left( \sum _{k\in K} w_{ki} -b_i\right) ^2 \end{aligned}$$
(36)

where \(\sigma \in {\mathbb {R}}^m_+\) is a vector of penalty weights. Consider the convex optimization problem

$$\begin{aligned} \min Q(w,\sigma ) {{\,\mathrm{\quad s.t. \,\, }\,}}\,\,w_k\in {{\,\mathrm{conv}\,}}(W_k),\,\,k\in K. \end{aligned}$$
(37)

Let \(\mu ^*\) be an optimal dual solution of (19) regarding the global constraints \(w\in H\), and set the penalty weights \(\sigma _i=0\) if \(\mu _i^*=0\), and \(\sigma _i\ge |\mu _i^*|\) otherwise, for \(i\in [m]\). Then it can be shown that (36) is an exact penalty function and that (37) is a reformulation of the convex relaxation (19), i.e. (19) is equivalent to (37).

Algorithm 6 presents a Frank-Wolfe (FW) algorithm for approximately solving the convex penalty problem (37). For acceleration, we use the Nesterov direction update rule (Nesterov 1983), line 17. We set the penalty weight \(\sigma =|\mu |\), where \(\mu \) is a dual solution of LP-IA (32). One step of the FW-algorithm is performed by approximately solving the problem with a linearized objective

$$\begin{aligned} \min \nabla _w Q({\tilde{w}},\sigma )^Tw {{\,\mathrm{\quad s.t. \,\, }\,}}w_k \in {{\,\mathrm{conv}\,}}(W_k), \quad k\in K, \end{aligned}$$
(38)

which is equivalent to solving the sub-problems

$$\begin{aligned} \min \nabla _{w_k} Q({\tilde{w}},\sigma )^Tw_k {{\,\mathrm{\quad s.t. \,\, }\,}}w_k \in {{\,\mathrm{conv}\,}}(W_k). \end{aligned}$$
(39)

The sub-problems (39) are solved with approxsolveminlpsubprobl, depicted in Algorithm 4, in order to quickly compute new columns. The columns can be computed in parallel. Note that the gradient \(\nabla _{w_k} Q({\tilde{w}},\sigma )\) is given by

$$\begin{aligned} \frac{\partial }{\partial w_{k0}}Q(w,\sigma )=1,\quad \frac{\partial }{\partial w_{ki}}Q(w,\sigma )= 2\sigma _i\left( \sum _{\ell \in K} w_{\ell i}-b_i\right) =:\eta _i (w,\sigma ). \end{aligned}$$

Hence, \(\nabla _{w_k}Q({\tilde{w}},\sigma )=(1,\eta ({\tilde{w}},\sigma )^T)\) for all \(k\in K\). The quadratic line search problem

$$\begin{aligned} \theta = \mathop {{{\,\mathrm{argmin}\,}}}\limits _{t\in [0,1]} Q({\tilde{w}}+t(r-{\tilde{w}}),\sigma ) \end{aligned}$$

in step 14 of Algorithm 6 can be easily solved.
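Indeed, assuming \(F(w)=\sum _{k\in K} w_{k0}\) as in the gradient formula above and restricting the step to \([0,1]\), the line search reduces to clipping the minimizer of a one-dimensional quadratic; a sketch:

```python
import numpy as np

def exact_line_search(w_tilde, r, sigma, b):
    """Minimize t -> Q(w_tilde + t*(r - w_tilde), sigma) over [0, 1], with
    Q(w) = sum_k w_{k0} + sum_i sigma_i * (sum_k w_{ki} - b_i)^2 as in (36).
    w_tilde, r have shape (K, m+1); column 0 holds the objective part."""
    dw = r - w_tilde
    lin = dw[:, 0].sum()                 # slope of the linear part F
    g = w_tilde[:, 1:].sum(axis=0) - b   # current global-constraint residuals
    h = dw[:, 1:].sum(axis=0)            # residual change per unit step
    a = (sigma * h ** 2).sum()           # Q(t) = a t^2 + c1 t + const
    c1 = lin + 2.0 * (sigma * g * h).sum()
    if a <= 0.0:                         # Q is linear along the segment
        return 0.0 if c1 >= 0.0 else 1.0
    return float(np.clip(-c1 / (2.0 * a), 0.0, 1.0))
```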

figure f

A primal heuristic for finding solution candidates

In this section, we present two heuristic procedures for computing solution candidates. The first computes a feasible solution from the solution of the slack LP-IA problem (32). The second computes high-quality solution candidates.

Algorithm 7 presents the initial primal heuristic, which aims at eliminating the slacks in LP-IA master problem (32). It starts by running procedure nlpresourceproject, which performs an NLP local search on the following integer relaxed resource-projection NLP master problem

$$\begin{aligned} \begin{aligned} \min&\sum _{k\in K} \Vert A_kx_k-A_k{\check{x}}_k\Vert ^2,\\ {{\,\mathrm{\quad s.t. \,\, }\,}}&x\in P,\quad x_k\in G_k,\quad k\in K,\\ \end{aligned} \end{aligned}$$
(40)

where \({\check{x}}\) is the solution of the LP-IA master problem (32).

figure g

Using the potentially fractional solution \({\tilde{y}}\) of (40), the algorithm computes an integer globally feasible solution \({\hat{y}}\) by calling the procedure mipproject(\({\tilde{y}}\)), which solves the MIP-projection master problem

$$\begin{aligned} \begin{aligned} {\hat{y}} = {{\,\mathrm{argmin}\,}}\,\,&\sum _{k\in K}\Vert x_k-{\tilde{y}}_k\Vert _\infty \\ {{\,\mathrm{\quad s.t. \,\, }\,}}&x\in P,\,\, \, x_k\in Y_k,\, k\in K.\\ \end{aligned} \end{aligned}$$
(41)

The integer globally feasible solution \({\hat{y}}\) is then used as a starting point for an NLP local search with fixed integer variables, performed by procedure solvefixednip(\({\hat{y}}\)):

$$\begin{aligned} \begin{aligned} x^*={{\,\mathrm{argmin}\,}}\,\,&c^Tx + \sum _{i\in M_1} \theta _i s_i + \sum _{i\in M_2} \theta _i (s_{i1}+s_{i2})\\ {{\,\mathrm{\quad s.t. \,\, }\,}}&\sum _{k\in K} A_kx_k \le b_i+ s_i \ , \, i\in M_1,\\&\sum _{k\in K} A_kx_k= b_i+ s_{i1}-s_{i2} \ , \, i\in M_2,\\&x_k\in G_k, x_{ki}={\hat{y}}_{ki}, i\in [n_{k2}], k\in K. \end{aligned} \end{aligned}$$
(42)

Algorithm 8 presents a primal heuristic for computing a high-quality solution candidate of MINLP problem (10). The procedure is very similar to Algorithm 7, but it does not use the NLP resource-projection problem (40). Instead, the solution of LP-IA (25) is used directly in MIP-projection problem (41). There is no guarantee that the optimal solution of (41) provides the best primal bound. Moreover, it may be infeasible for the original problem (10). Therefore, we generate a pool \({\widehat{Y}}\) of feasible solutions of problem (41) provided by the MIP solver. Solution pool \({\widehat{Y}}\) provides good starting points for an NLP local search over the global space and increases the chance of improving the quality of the solution candidate.

Similarly to Algorithm 7, Algorithm 8 starts by computing an inner LP solution \({\check{x}}\) of problem (25) by calling procedure solveinnerlp. Point \({\check{x}}\) is used in procedure solpoolmipproject(\({\check{x}},N\)) to generate a solution set (pool) \({\widehat{Y}}\) of (41) of size N, which also includes the optimal solution. As in Algorithm 7, these alternative solutions are used to perform an NLP local search over the global space, defined in (42), by fixing the integer valued variables. In order to find better solution candidates, these steps are repeated iteratively with an updated point \({\check{x}}\). In each iteration, the point \({\check{x}}\) is shifted towards the point \(x^*\) corresponding to the best current primal bound of the original problem (10). This is a typical heuristic local search procedure, which aims to generate a different solution pool \({\widehat{Y}}\) in each iteration of the algorithm. Algorithm 8 terminates when the maximum number of iterations is reached or the best primal bound in the current iteration does not improve on that of the previous iteration.
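The shift of \({\check{x}}\) towards \(x^*\) could, for example, be realized as a convex combination; this is our assumption, since the exact update rule is not spelled out here (the weight \(\tau\) appears in the experimental settings in Sect. 5):

```python
def shift_point(x_check, x_star, tau=0.5):
    """Hypothetical update: move each component of x_check a fraction tau
    towards the incumbent x_star (convex combination)."""
    return [(1.0 - tau) * xc + tau * xs for xc, xs in zip(x_check, x_star)]
```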

figure h

Main algorithm

Algorithm 9 describes a MINLP-CG method for computing a solution candidate of (10). The algorithm starts with the initialization of the IA with the procedure iainit (Algorithm 3). Since problem (32) might have nonzero slack values, the algorithm tries to eliminate them by computing a first primal solution. This is done by alternately calling the procedures approxcolgen (Algorithm 5) and findsolutioninit (Algorithm 7). To quickly improve the convex relaxation (19), the algorithm calls the FW-based column generation procedure fwcolgen (Algorithm 6).

In the main loop, the algorithm alternately performs colgen (Algorithm 2) and the heuristic procedure findsolution (Algorithm 8) for computing solution candidates. The procedure colgen is performed for a subset of blocks \({\hat{K}} \subseteq K \setminus {\{1\}}\), in order to keep the number of solved MINLP sub-problems low. Moreover, focusing on a subset of blocks helps to avoid computing already existing columns. Blocks can be excluded for a while based on the value of the reduced cost \(\delta _k, k \in K\setminus {\{1\}}\), which is computed in line 14, as defined in (29). The reduced block set \({\hat{K}}\) contains the blocks with negative reduced cost, i.e. \(\delta _k < 0\), and is updated at each main iteration by solving the sub-problems for the full set K. Note that if the reduced cost \(\delta _k\) is nonnegative for all blocks, i.e. \(\delta _k \ge 0, k \in K\setminus {\{1\}}\), the Column Generation has converged and the algorithm terminates.
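The block-selection and termination logic can be sketched as follows (names are ours):

```python
def reduced_block_set(delta):
    """K_hat: blocks with negative reduced cost (29).  Column Generation
    has converged, and the algorithm terminates, when this set is empty."""
    return {k for k, d in delta.items() if d < 0.0}
```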

figure i

Convergence analysis

Column Generation and Frank-Wolfe algorithms are well-known approaches and their convergence has already been proven. In this section, we discuss the convergence of Column Generation in Algorithm 9 and the Frank-Wolfe method in Algorithm 6.

Convergence of Algorithm 9

The convergence proof of the Column Generation algorithm is due to its equivalence to the dual cutting-plane algorithm, see Lemma 4.10 in Nowak (2005). Note that the proof is not based on the computation of the reduced cost \(\delta _k, k \in K\), defined in (29). However, the reduced cost can be used for measuring the impact of the new columns and as a termination criterion, see Nowak (2005). For the convergence proof, we assume that all LP master problems (21) and MINLP sub-problems (28) are solved to optimality. Since Algorithm 2 is performed on the subset of blocks \({\hat{K}}\), we ensure that the main Algorithm 9 converges by performing a standard CG step in line 14 for all blocks. Note that the direct integration of the linear block (14) into LP-IA master problem (25) is equivalent to performing the CG algorithm for that block until convergence, as shown in Lemma 1.

Proposition 1

Let \(x^p\) be the solution of LP-IA (21) at the p-th iteration of Algorithm 9 at line 12 and \(\nu ^*\) be the optimal value of convex hull relaxation (19). Then \(\lim \limits _{p \rightarrow \infty }c^Tx^p=\nu ^*\).

Proof

The proof is equivalent to the proof of Proposition 4.11 in Nowak (2005). \(\square \)

Convergence of Algorithm 6

Algorithm 6 combines the original Frank-Wolfe algorithm (see Algorithm 1 in Jaggi (2013)) with the Nesterov update rule (Nesterov 1983). The approach proposed in Nesterov (1983) has a convergence rate of \(O(1/p^2)\), whereas the original Frank-Wolfe algorithm has a slower convergence rate of O(1/p), see Theorem 1 in Jaggi (2013). In order to prove the convergence of Algorithm 6, we assume that all sub-problems (39) are solved to global optimality. Next, we state that Algorithm 6 has a convergence rate of \(O(1/p^2)\).

Proposition 2

Let \(\nu ^p:=Q({\tilde{w}}^p,\sigma )\) be the value of the quadratic penalty function (36) at the p-th iteration of Algorithm 6 and \(\nu ^*\) be the optimal value of convex hull relaxation (19). Assume that \(\sigma _i \ge |\mu ^*_i|, i \in [m]\), where \(\mu ^*\) is an optimal dual solution of (19). Then there exists a constant C such that for all \(p\ge 0\)

$$\begin{aligned} \nu ^p - \nu ^* \le \dfrac{C}{(p + 2)^2}. \end{aligned}$$

Proof

The proof is equivalent to the proof of Theorem 1 in Mouatasim and Farhaoui (2019). \(\square \)

Numerical results

In this section, we evaluate the performance of Algorithm 9 on several DESS model instances, described in Sect. 2. Algorithm 9 was implemented with Pyomo (Hart et al. 2017), an algebraic modeling language in Python, as part of the parallel MINLP solver Decogo (Nowak et al. 2018). In the current implementation of Decogo, the sub-problems are not solved in parallel. The solver utilizes SCIP 7.0.0 (Gamrath et al. 2020) for solving MINLP sub-problems, GUROBI 9.0.1 (Gurobi Optimization 2020) for solving MIP/LP master problems and IPOPT 3.12.13 (Wächter and Biegler 2006) for performing NLP local search in master and sub-problems. All computational experiments were performed on a computer with an AMD Ryzen Threadripper 1950X 16-core 3.4 GHz CPU and 128 GB RAM. For the experiments with DESS instances, the blocks K were defined manually using the Pyomo Block component (Hart et al. 2017); for more details see Sect. 2.4.

For the experiments with Algorithm 9, we use the following stopping criteria, unless otherwise mentioned:

  • The time and iteration limits in Algorithm 9 are set to 12 h and 20 iterations, respectively;

  • The iteration limit of Algorithm 2 is set to 5;

  • The iteration limit of the outer and inner loop of Algorithm 6 is set to 5 and 10, respectively;

  • In Algorithm 8, the iteration limit is set to 5, the pool size to \(N=100\) and \(\tau =0.5\). To generate a solution pool with Gurobi, we used a parameter value of poolsearchmode=1. In this approach, the solver computes a set of N high-quality solutions including an optimal one, see Gurobi Optimization (2020) for more details;

  • For SCIP we set the maximum number of processed nodes after the last improvement of the primal bound to 1000, since, for CG, it is sufficient to compute good feasible solutions, for more details see Muts et al. (2020a).

Our main research question is how to exploit the decomposable structure of DESS models to generate a new approach which may be an alternative to state-of-the-art software like BARON. During the investigation, we developed several elements to speed up the computation. To evaluate their effect, first, in Sect. 5.1, we measure the effect of distinguishing the linear block and adding the fast FW approach for generating promising columns. The idea of using a pool of high-quality MIP solutions is numerically investigated in Sect. 5.2. The pool generates a group of starting points to look for feasible solutions. This not only affects the quality of the best solution reached, but also provides alternatives for the design problem that may be analyzed by the decision maker. Finally, in Sect. 5.3, we compare the outcomes and the solution process of Algorithm 9 to those of the state-of-the-art solver BARON and to the problem-tailored MIP approximation algorithm presented in Goderbauer et al. (2016).

Effect of linear block integration into LP-IA and fast CG with the Frank-Wolfe approach

Fig. 4  Convergence of the IA objective of Algorithm 9 with and without linear block integration for instance S16L16-1

To illustrate the impact of direct integration of one linear block into the LP IA master problem, as defined in (25), we compare two runs of Algorithm 9 on the same instance: The first run uses the LP-IA formulation given by (25) and the second uses the formulation given by (22). The comparison was done for instance S16L16-1, see Table 6 for more details.

For this particular instance, approximately 37% of all variables belong to the linear block. Whereas the CG algorithm for formulation (22) relies on generating linear block polytope vertices as columns, formulation (25) saves this effort. The difference can be observed in Fig. 4; the running time of Algorithm 9 with the direct linear block integration is drastically reduced from two days to approximately two hours. It also shows a significant improvement of the convergence speed of the IA objective value and a significant improvement of its final value.

Fig. 5  Convergence of the IA objective value of Algorithm 9 for S4L4-1 with and without the fast FW column generation

The next question focuses on the convergence impact of including procedure fwcolgen (Algorithm 6) in Algorithm 9. To measure the effect, we perform two otherwise identical runs of Algorithm 9, one of which does not use procedure fwcolgen. We only focus on the convergence of the IA and do not perform procedure findsolution (Algorithm 8). The test instance is S4L4-1, see Table 6 for more details.

Figure 5 shows that the IA objective value (25) converges faster with procedure fwcolgen at the initial stage of Algorithm 9. Moreover, in the beginning of the algorithm, the fwcolgen IA objective value is already very close to the final IA objective value: after 50 seconds, it is approximately 0.5% worse than the final IA objective value. Figure 5 also indicates that Algorithm 9 with procedure fwcolgen converges more slowly to the final IA objective value than without it. The reason is that fwcolgen fails to generate high-quality columns in the later stages of the algorithm, since there is no guarantee that the penalty weights \(\sigma \), defined in (36), are sufficiently large. Another reason is that the procedure generates columns by solving NLP sub-problems heuristically. However, it has a speed advantage compared to column generation by solving MINLP sub-problems.

Impact of using the solution pool in Algorithm 8

This section focuses on using the solution pool in the heuristic procedure findsolution (Algorithm 8). Note that, using a single integer feasible point, we perform a local search with the fixed NLP (42). This may result in a point that is not a feasible solution of the problem. Using a pool of starting points may alleviate this effect.

Fig. 6  Solution pool for S16L16-1

Figure 6 shows the objective function values of all pool solutions generated for instance S16L16-1. The solution with the best primal bound of approximately \(-5.15\times 10^{7}\) is identified in the seventh iteration of Algorithm 9. The component configuration of this solution is specified in Table 1. In this solution, all four available units of all component types are selected. From Fig. 6, one can notice that other high-quality solutions could be computed in earlier iterations of the algorithm. Note that the MIP solver was not always able to provide a solution pool of the prescribed size of \(N=100\); instead it computed a smaller solution pool. Therefore, in Fig. 6 one can observe that the number of generated feasible solutions varies over the iterations. Moreover, Fig. 6 indicates that a solution pool may contain many feasible points with a very similar objective function value. These points are distinct, since GUROBI adds a feasible point to the solution pool only if it differs from the other pool members in at least one integer variable (Gurobi Optimization 2020).

Table 1 Nominal equipment sizes for all component types and unit numbers for the best solution computed with Algorithm 9 for instance S16L16-1
Fig. 7  DESS configurations for all solutions of the solution pool of instance S16L16-1 within a range of 2% from the solution with the best primal bound

One advantage of the solution method is that a variety of feasible solutions is generated during the process. These can be stored in the solution pool. Often near-optimal solutions are valuable for the user to gain a more profound knowledge of the optimization problem (Voll et al. 2015). In addition, if they better satisfy requirements not included in the mathematical model (e.g., safety, maintainability, and operability), near-optimal solutions might be finally selected in the design of energy systems instead of an optimal one (Bejan et al. 1996). Figure 7 shows the DESS configurations of all solutions of the solution pool of instance S16L16-1 with a primal bound differing at most 2% from the best solution. In the case of instance S16L16-1, this range comprises 95 solutions. In Fig. 7 they are sorted by their primal bound values; thus, the solution from Table 1 is shown on the very left. First, it is apparent that all four available units of each component type are selected in almost all cases. Further investigations could analyze whether this indeed provides advantages regarding the objective function value or whether this behavior results from the solution procedure. For the boiler and the CHP engine, the total installed capacity is mainly concentrated in one or two units, and the other units are installed with a smaller or minimal size. For the cooling equipment, the total capacity tends to be more evenly distributed among all units. This may be explained by their distinct part-load behavior. The configuration of most solutions is relatively similar. The considerably different solutions 10 and 12 indicate that a shift of the cold supply from heat-driven absorption chillers to a higher proportion of electric turbo chillers is possible with only small reductions in the objective function value. This simultaneously reduces the required boiler capacity and increases the capacity of the CHP engines to cover the increased internal electricity demand.

Comparison to other approaches

In this section, we compare the performance and solution quality of Algorithm 9 to other solvers on 12 selected instances from the DESSLib library (Bahl et al. 2016). The test instances were selected by varying the number of unit components and the number of load cases, as described in Sect. 2.3. Instance characteristics are reported in Table 6. The results of Algorithm 9 are compared to results obtained with the state-of-the-art MINLP solver BARON 20.4.14 (Sahinidis 2020). We also compare with results of the adaptive discretization MINLP algorithm (AdaptDiscAlgo) reported in Goderbauer et al. (2016). We do not compare the performance of Algorithm 9 to that of AdaptDiscAlgo, since the implementation of AdaptDiscAlgo is not publicly available. Instead, we analyze the results by looking at the primal bounds of AdaptDiscAlgo, reported in Bahl et al. (2016). Note that AdaptDiscAlgo does not provide a valid lower bound and cannot be applied to general MINLP problems.

In this analysis, we use a 12 hour time limit for BARON and Algorithm 9. For the quality evaluation of a feasible solution, we define the gap to a reference (base) objective function value b as

$$\begin{aligned} \text {gap}(a, b) = - 100\dfrac{a-b}{\max \{|a|,|b|\} + 10^{-7}}, \end{aligned}$$
(43)

where a is the objective value of the feasible solution point. Note that the minus sign in (43) is used because we are considering a maximization problem; in the case of minimization, it is omitted.
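Gap definition (43) translates directly into code (the sign convention follows the maximization remark above):

```python
def gap(a, b, maximize=True):
    """Relative gap (43), in percent, of objective value a to reference b."""
    g = 100.0 * (a - b) / (max(abs(a), abs(b)) + 1e-7)
    return -g if maximize else g
```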

Table 2 compares the primal solution quality of decomposition Algorithm 9 versus that of BARON and AdaptDiscAlgo using gap value (43). Let \(\nu _{\textsc {cg}}\) and \(\nu _{\textsc {b}}\) be the primal bound of CG Algorithm 9 and BARON, respectively. For the sake of simplicity, let \(\nu ^*\) denote the best known primal bound among BARON and AdaptDiscAlgo. Table 2 also presents the duality gap of Algorithm 9 and BARON. Let \(\underline{\nu }_{\textsc {cg}}\) denote the dual bound of Algorithm 9 defined by the objective value of the last solution of (25) and \(\underline{\nu }_{\textsc {b}}\) denote the dual bound of BARON given by the lower bound provided by BARON.

Table 2 Solution quality comparison of Algorithm 9 solution \(\nu _{\textsc {cg}}\) to the primal BARON solution \(\nu _{\textsc {b}}\) and the best known solution \(\nu ^*\). Note that a negative value means that the primal bound has been improved. All values are given as percentages

Table 2 shows that for five instances, Algorithm 9 improves the primal bound of BARON. Even though AdaptDiscAlgo is specially tailored to DESS, the Algorithm 9 result differs by at most 3% from the best known bound, and for one instance it improves the solution. We also evaluated the solution after performing two main iterations of Algorithm 9; the solution quality is relatively good. This means that, with a limited solution time, relatively good solutions can be generated after two iterations. For the instances with more than 500 variables, the duality gap of Algorithm 9 is smaller than that of BARON. One can also observe that the duality gap of Algorithm 9 depends mostly on the number of unit components (parameter S) and is less sensitive to the number of load cases (parameter L).

Table 3 gives the number of iterations \(N_{iter}\) performed by Algorithm 9 and the running times of both Algorithm 9 and BARON.

Table 3 Performance comparison of Column Generation Algorithm 9 and BARON

Despite the fact that Algorithm 9 does not solve the sub-problems in parallel and is implemented in Python, Table 3 shows that for several larger instances it requires less time than BARON. Moreover, we can see that for some instances Algorithm 9 reached neither the maximum number of iterations nor the time limit. This indicates that the Column Generation has converged.

We are interested in the computing time with increasing instance size without varying the number of iterations. The estimated running time of Algorithm 9 after two main iterations is sketched in Fig. 8. Moreover, Fig. 8 sketches the actual time for initializing the Inner Approximation (all procedures before entering the main iterations of Algorithm 9). Note that the total time for finishing two iterations includes the actual time for the CG initialization. The time estimate for two main iterations is based on the assumption that, for each instance, each call of findsolution (Algorithm 8) obtains a solution pool of size 500 from the MIP solver (\(500 = 5 \cdot 100\), where 5 is the prescribed number of iterations of Algorithm 8 and 100 is the prescribed size of the solution pool). In fact, the solution pool provided by a MIP solver can be smaller than the prescribed size, see Fig. 6. Therefore, the estimated time indicates how long it would take if the NLP solver were called 500 times to solve the fixed NLP problem (42) in the first two main iterations of Algorithm 9. This time could also be useful for estimating the time complexity of similar instances of much larger size. From Fig. 8, one can notice that the time for initializing the Column Generation is relatively small in comparison to the estimated time after performing two main iterations of Algorithm 9. Figure 5 illustrates that the algorithm quickly computes a good CG bound in its initial stage. Figure 8 indicates that findsolution (Algorithm 8) has a large influence on the running time of the entire algorithm.

Fig. 8  Algorithm 9 computing time versus problem size of all instances in Table 3

Conclusions

Energy system optimization models combine potentially difficult sub-models into a global system. Due to their dimension, such systems may be difficult to solve by generic solvers based on a single branch-and-bound tree like BARON. Decomposition appears to be an appropriate concept to apply to such models due to the structure of sub-models and global constraints. Our research question is how to do this in an efficient way.

In our investigation, we looked into the potential of using Column Generation (CG). The idea is to generate feasible solutions of sub-models, defining columns of a global master problem, which is used to steer the search for a global solution. The master problem is based on an easy-to-solve inner approximation (IA) of a convex hull relaxation. One of our findings is that it is more efficient to treat blocks with a linear structure separately and include them directly in the master problem instead of generating columns for them. To speed up column generation, we developed a fast Frank-Wolfe (FW) algorithm to generate feasible solutions of sub-problems. Solution candidates are computed by solving NLP problems with fixed integer variables regarding a solution pool of a MIP projection master problem.

Typical features of this approach for solving energy system MINLP models are: (i) no global branch-and-bound tree is used, (ii) sub-problems can be solved in parallel to generate columns, which do not have to be optimal, nor become available at the same time to synchronize the solution, (iii) an arbitrary solver can be used to generate solutions of sub-models, (iv) the approach (and the implementation) is generic and can be used to solve other nonconvex MINLP models, (v) the process generates feasible, not necessarily optimal, solutions during the algorithm run, which may be inspected by the decision maker.

Notice that this way of working provides a generic procedure where the sub-problems may have a black-box structure. In the extreme case, one may even modify the optimization model during the solution process. The generated columns and solutions can also be used to perform a warm start if the model has been changed slightly, e.g. in a dynamic optimization model with a moving horizon.

Experiments with instances of several hundred up to thousands of variables show that standard software like BARON is faster for smaller problems and reaches good solutions. We should keep in mind that, in the current Python-based implementation, we do not solve the sub-problems in parallel. Interestingly, for the largest DESS models we obtained significantly better dual bounds and slightly better primal bounds than BARON. Hence, for larger models, with the thousands of variables energy system models usually consist of, the presented decomposition approach is indeed able to reach better solutions.

References


Acknowledgements

This work has been funded by Grants 03ET4053A and 03ET4053B of the German Federal Ministry for Economic Affairs and Energy, Grant 01IS19079 of the Federal Ministry of Education and Research and Grant RTI2018-095993-B-I00 from the Spanish Ministry in part financed by the European Regional Development Fund (ERDF). The funding is gratefully acknowledged.

Funding

Open Access funding enabled and organized by Projekt DEAL.

Author information


Corresponding author

Correspondence to Ivo Nowak.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A Model parameters and equipment constraints

For the sake of completeness, the equations of the employed technical equipment models are given below. These include the nonlinear performance maps in Equations (44)–(48) and the investment cost functions in Equations (49)–(52). The applied technical parameters are given in Table 4. Moreover, Table 5 contains the economic parameters of the DESS model. All parameter values and equations are taken from Goderbauer et al. (2016).

Part-load performance equations: (44)–(48)

\({\hbox {Boiler }(\forall s \in B)}\):

$$\begin{aligned} \begin{aligned} {\dot{U}}_{s}({\dot{V}}_{s{\ell}},{\dot{V}}_{s}^{\mathrm {N}}) = \frac{1}{\eta ^{\mathrm {N,B}}}\cdot \left( \mathrm {c}_{1}^{\mathrm {B}} \cdot \frac{{\dot{V}}_{s{\ell}}^{2}}{{\dot{V}}_{s}^{\mathrm {N}}} + \mathrm {c}_{2}^{\mathrm {B}} \cdot {\dot{V}}_{s{\ell}} + \mathrm {c}_{3}^{\mathrm {B}} \cdot {\dot{V}}_{s}^{\mathrm {N}} \right) ,\\ \eta ^{\mathrm {N,B}}=0.9, \quad \mathrm {c}_{1}^{\mathrm {B}}=0.1021, \quad \mathrm {c}_{2}^{\mathrm {B}}=0.8355, \quad \mathrm {c}_{3}^{\mathrm {B}}=0.0666 \end{aligned} \end{aligned}$$
(44)

\({\hbox {Absorption chiller }(\forall s \in A)}\):

$$\begin{aligned} \begin{aligned} {\dot{U}}_{s}({\dot{V}}_{s{\ell}},{\dot{V}}_{s}^{\mathrm {N}}) = \frac{1}{\mathrm {COP}^{\mathrm {N,A}}}\cdot \left( \mathrm {c}_{1}^{\mathrm {A}} \cdot \frac{{\dot{V}}_{s{\ell}}^{2}}{{\dot{V}}_{s}^{\mathrm {N}}} + \mathrm {c}_{2}^{\mathrm {A}} \cdot {\dot{V}}_{s{\ell}} + \mathrm {c}_{3}^{\mathrm {A}} \cdot {\dot{V}}_{s}^{\mathrm {N}} \right) ,\\ \mathrm {COP}^{\mathrm {N,A}}=0.67, \quad \mathrm {c}_{1}^{\mathrm {A}}=0.8333, \quad \mathrm {c}_{2}^{\mathrm {A}}=-0.0833, \quad \mathrm {c}_{3}^{\mathrm {A}}=0.25 \end{aligned} \end{aligned}$$
(45)

\({\hbox {Turbo chiller }(\forall s \in T)}\):

$$\begin{aligned} \begin{aligned} {\dot{U}}_{s}({\dot{V}}_{s{\ell}},{\dot{V}}_{s}^{\mathrm {N}}) = \frac{1}{\mathrm {COP}^{\mathrm {N,T}}}\cdot \left( \mathrm {c}_{1}^{\mathrm {T}} \cdot \frac{{\dot{V}}_{s{\ell}}^{2}}{{\dot{V}}_{s}^{\mathrm {N}}} + \mathrm {c}_{2}^{\mathrm {T}} \cdot {\dot{V}}_{s{\ell}} + \mathrm {c}_{3}^{\mathrm {T}} \cdot {\dot{V}}_{s}^{\mathrm {N}} \right) ,\\ \mathrm {COP}^{\mathrm {N,T}}=5.54, \quad \mathrm {c}_{1}^{\mathrm {T}}=0.8119, \quad \mathrm {c}_{2}^{\mathrm {T}}=-0.1688, \quad \mathrm {c}_{3}^{\mathrm {T}}=0.3392 \end{aligned} \end{aligned}$$
(46)
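Equations (44)–(46) share the same quadratic part-load structure and differ only in their coefficients and in the nominal efficiency or COP. As an illustrative sketch (the function names are ours, not from the paper), the map can be evaluated as:

```python
def quadratic_performance_map(v_load, v_nominal, c1, c2, c3, eta_nominal):
    """Input demand U for a component with the quadratic part-load map
    of Eqs. (44)-(46) (boiler, absorption chiller, turbo chiller).

    v_load      -- current output V_l
    v_nominal   -- nominal size V^N
    c1, c2, c3  -- component-specific coefficients
    eta_nominal -- nominal efficiency (boiler) or COP (chillers)
    """
    return (c1 * v_load**2 / v_nominal + c2 * v_load
            + c3 * v_nominal) / eta_nominal

def boiler_fuel_demand(v_load, v_nominal):
    # Boiler of Eq. (44) with the coefficients given above.
    return quadratic_performance_map(v_load, v_nominal,
                                     c1=0.1021, c2=0.8355, c3=0.0666,
                                     eta_nominal=0.9)
```

Note that at full load (v_load = v_nominal) the boiler map yields an effective efficiency slightly below the nominal value of 0.9, since the coefficients sum to just over 1.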

\({\hbox {CHP engine }(\forall s \in C)}\):

$$\begin{aligned}&\begin{aligned} {\dot{U}}_{s}({\dot{V}}_{s{\ell}},{\dot{V}}_{s}^{\mathrm {N}}) = \mathrm {c}_{1}^{\mathrm {C}} + \mathrm {c}_{2}^{\mathrm {C}} \cdot \frac{{\dot{V}}_{s{\ell}}}{{\dot{V}}_{s}^{\mathrm {N}}} + \mathrm {c}_{3}^{\mathrm {C}} \cdot {\dot{V}}_{s}^{\mathrm {N}} + \mathrm {c}_{4}^{\mathrm {C}} \cdot \left( \frac{{\dot{V}}_{s{\ell}}}{{\dot{V}}_{s}^{\mathrm {N}}} \right) ^{2} + \mathrm {c}_{5}^{\mathrm {C}} \cdot {\dot{V}}_{s{\ell}} + \mathrm {c}_{6}^{\mathrm {C}} \cdot \left( {\dot{V}}_{s}^{\mathrm {N}} \right) ^{2} ,\\ \mathrm {c}_{1}^{\mathrm {C}}=550.3, \quad \mathrm {c}_{2}^{\mathrm {C}}=-1328, \quad \mathrm {c}_{3}^{\mathrm {C}}=-0.4537, \\ \mathrm {c}_{4}^{\mathrm {C}}=668.3, \quad \mathrm {c}_{5}^{\mathrm {C}}=2.649, \quad \mathrm {c}_{6}^{\mathrm {C}}=9.571\cdot 10^{-5}\\ \end{aligned} \end{aligned}$$
(47)
$$\begin{aligned}\begin{aligned} {\dot{V}}_{s}^{\mathrm {el}}({\dot{V}}_{s{\ell}},{\dot{V}}_{s}^{\mathrm {N}}) = \mathrm {c}_{7}^{\mathrm {C}} + \mathrm {c}_{8}^{\mathrm {C}} \cdot \frac{{\dot{V}}_{s{\ell}}}{{\dot{V}}_{s}^{\mathrm {N}}} + \mathrm {c}_{9}^{\mathrm {C}} \cdot {\dot{V}}_{s}^{\mathrm {N}} + \mathrm {c}_{10}^{\mathrm {C}} \cdot \left( \frac{{\dot{V}}_{s{\ell}}}{{\dot{V}}_{s}^{\mathrm {N}}} \right) ^{2} + \mathrm {c}_{11}^{\mathrm {C}} \cdot {\dot{V}}_{s{\ell}} + \mathrm {c}_{12}^{\mathrm {C}} \cdot \left( {\dot{V}}_{s}^{\mathrm {N}} \right) ^{2} ,\\ \mathrm {c}_{7}^{\mathrm {C}}=518.8, \quad \mathrm {c}_{8}^{\mathrm {C}}=-1203, \quad \mathrm {c}_{9}^{\mathrm {C}}=-0.5361, \\ \mathrm {c}_{10}^{\mathrm {C}}=579.3, \quad \mathrm {c}_{11}^{\mathrm {C}}=1.464, \quad \mathrm {c}_{12}^{\mathrm {C}}=7.728\cdot 10^{-5}\\ \end{aligned} \end{aligned}$$
(48)
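The two CHP equations (47) and (48) share one bivariate polynomial form in the part-load ratio and the nominal size. A small sketch (naming is ours) evaluating both with the coefficients printed above:

```python
def chp_bivariate(v_load, v_nominal, c):
    """Evaluate the polynomial form shared by Eqs. (47)-(48):
    c[0] + c[1]*r + c[2]*V^N + c[3]*r**2 + c[4]*V_l + c[5]*(V^N)**2,
    where r = V_l / V^N is the part-load ratio."""
    r = v_load / v_nominal
    return (c[0] + c[1] * r + c[2] * v_nominal
            + c[3] * r**2 + c[4] * v_load + c[5] * v_nominal**2)

# Coefficient vectors from Eqs. (47) and (48):
CHP_FUEL = [550.3, -1328.0, -0.4537, 668.3, 2.649, 9.571e-5]   # U
CHP_ELEC = [518.8, -1203.0, -0.5361, 579.3, 1.464, 7.728e-5]   # V_el
```

At full load of a 1000 kW unit this gives a fuel demand of about 2181.6 and an electrical output of about 900.3, i.e. an electrical efficiency of roughly 0.41, consistent with the nominal efficiencies in Eq. (52).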

Investment cost equations: (49)–(52)

\({\hbox {Boiler }(\forall s \in B)}\):

$$\begin{aligned} \begin{aligned} I\left( {\dot{V}}_{s}^{\mathrm {N}}\right) = 1.85484 \cdot \Big [\big (&11418.6 + 64.115 \cdot \left({\dot{V}}_{s}^{\mathrm {N}}\right)^{0.7978}\big ) \cdot 1.046 \cdot \big (\\&1.0917 - 1.1921 \cdot 10^{-6} \cdot {\dot{V}}_{s}^{\mathrm {N}} \big )\Big ] \end{aligned} \end{aligned}$$
(49)

\({\hbox {Absorption chiller }(\forall s \in A)}\):

$$\begin{aligned} \begin{aligned} I\left( {\dot{V}}_{s}^{\mathrm {N}}\right) = 0.50401 \cdot 17554.18 \cdot \left({\dot{V}}_{s}^{\mathrm {N}}\right)^{0.4345} \end{aligned} \end{aligned}$$
(50)

\({\hbox {Turbo chiller }(\forall s \in T)}\):

$$\begin{aligned} \begin{aligned} I\left( {\dot{V}}_{s}^{\mathrm {N}}\right) = 0.8102 \cdot {\dot{V}}_{s}^{\mathrm {N}} \cdot \left( 179.63 + 4991.3436 \cdot \left({\dot{V}}_{s}^{\mathrm {N}}\right)^{-0.6794}\right) \end{aligned} \end{aligned}$$
(51)

\({\hbox {CHP engine }(\forall s \in C)}\):

$$\begin{aligned} \begin{aligned} I\left( {\dot{V}}_{s}^{\mathrm {N}}\right) = 9332.6 \cdot \left( {\dot{V}}_{s}^{\mathrm {N}} \cdot \frac{\eta _{s}^{\mathrm {N,el}}({\dot{V}}_{s}^{\mathrm {N}})}{\eta _{s}^{\mathrm {N,th}}({\dot{V}}_{s}^{\mathrm {N}})} \right) ^{0.539}, \\ \eta _{s}^{\mathrm {N,th}}({\dot{V}}_{s}^{\mathrm {N}})=0.498 - 3.55 \cdot 10^{-5} \cdot {\dot{V}}_{s}^{\mathrm {N}}, \qquad \eta _{s}^{\mathrm {N,el}}({\dot{V}}_{s}^{\mathrm {N}})=0.87 - \eta _{s}^{\mathrm {N,th}}({\dot{V}}_{s}^{\mathrm {N}}) \end{aligned} \end{aligned}$$
(52)
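The investment cost functions exhibit economies of scale. As an illustrative sketch (the function name is ours), the boiler cost function of Eq. (49) can be evaluated as:

```python
def boiler_investment(v_nominal):
    """Investment cost of a boiler of nominal size V^N, Eq. (49)."""
    base = 11418.6 + 64.115 * v_nominal**0.7978
    size_correction = 1.0917 - 1.1921e-6 * v_nominal
    return 1.85484 * base * 1.046 * size_correction
```

Doubling the nominal size less than doubles the investment cost, which is what makes the cost functions concave and the model nonconvex.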
Table 4 Parameter values for all types of technical equipment (minimal and maximal nominal size \({\dot{V}}^{\mathrm {N}}\), annual maintenance cost share m and minimum part-load factor \(\alpha \))
Table 5 Economic parameters of the DESS model

Appendix B Instances used for numerical experiments

The following table contains characteristics of the instances used for the comparison of Algorithm 9 with BARON (Sahinidis 2020) and with the MIP approximation algorithm of Goderbauer et al. (2016) in Sect. 5.3.

Table 6 Characteristics of selected test instances. Note that |K| denotes the number of blocks. The number of units applies to all four component types

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Muts, P., Bruche, S., Nowak, I. et al. A column generation algorithm for solving energy system planning problems. Optim Eng (2021). https://doi.org/10.1007/s11081-021-09684-2


Keywords

  • Decomposition method
  • Parallel computing
  • Column generation
  • Nonconvex optimization
  • Global optimization
  • Mixed-integer nonlinear programming