1 Introduction

The physical and virtual connectivity of systems steadily increases in order to improve throughput, performance, and safety (Engell et al. 2016). The resources that connect the different sub-systems can be flows of energy, material, or information, or more specific constraints, such as quantitative restrictions on hazardous substances or certificates, e.g. a given amount of \(\mathrm {CO}_{2}\)-certificates that cannot be exceeded (Rius-Sorolla et al. 2020).

Of special interest are systems where the resources are not only shared bilaterally but among several sub-systems. Such systems arise in the process industry, where several units are connected via networks of energy and material (Jose and Ungar 2000; Wenzel et al. 2017), in power systems, where the economic dispatch between different electricity generation facilities has to be coordinated to satisfy the network demands (Wang et al. 1995; Zhang et al. 2013) or where the demand response of small loads such as residential smart appliances is integrated into the network (Gatsis and Giannakis 2013; Safdarian et al. 2014), in information technology, for instance when coordinating bandwidth between different agents (Hasan et al. 2014; Koutsopoulos and Iosifidis 2010), or when coordinating different autonomous systems such as robots or vehicles to maintain connectivity constraints (Cortés et al. 2004; Galceran and Carreras 2013).

The challenge with such systems is that often a monolithic optimization is not possible. The optimization of a whole chemical site with different ownership of the plants, for instance, requires a limited flow of information to maintain the confidentiality of business data or technical information such as prices, production targets, capacities, efficiencies, etc. Another reason is that in many cases the autonomy and decision-making power should remain with the respective sub-systems, which is not the case if a third party explicitly determines the operating conditions and the coordination of the resources between the sub-systems. If the scale of the resulting optimization problem is large, transparency of the results can be limited, as root causes are difficult to trace across sub-system boundaries (Tang et al. 2018). Furthermore, the implementation of monolithic solutions for resource allocation requires the separation of the models from the sub-systems, which constitutes an additional source of error if the models have to be maintained in different locations. Lastly, if the optimization is carried out for control purposes, solutions that reflect the modularity of the system and provide redundancy are often more desirable than a monolithic optimization (Camponogara et al. 2002; Cheng et al. 2007; Christofides et al. 2013; Farokhi et al. 2014; Maestre and Negenborn 2014; Scattolini 2009; Van Parys and Pipeleers 2017).

As a consequence, interconnected systems are often optimized on two levels: on the sub-system level, each sub-system tries to maximize its performance according to given boundary conditions, while on the system-wide or coordinator level, the optimal allocation of the shared resources is performed (Mesarovic et al. 1970). Such bilevel problems can be solved via iterative distributed optimization approaches, i.e. primal-based and dual-based decomposition methods, which are compared for instance in Conejo et al. (2006), Palomar and Chiang (2006, 2007). The most suitable methods to tackle the aforementioned challenges, in particular the confidentiality issue, are dual-based decomposition methods, where the coordinator broadcasts information such as prices to the different sub-systems, whereupon these respond with their estimated usage of the shared resources, and the coordinator iteratively adapts the prices until the resource balances are met.

While methods for the distributed optimization of steady state problems have been thoroughly investigated, dynamic systems have mostly been considered in the context of distributed model predictive control. Most of the work in this area considers the control of continuous processes with linear dynamics, cf. the survey in Negenborn and Maestre (2014). To the authors’ knowledge, little work has been done to compare different distributed optimization methods for coordinating resources among distributed dynamic systems.

The application that we discuss here is the dynamic resource allocation for coupled chemical reactors that are operated in semi-batch mode, i.e., some substances are filled into the reactor at the start while others are dosed during the batch run. At the end of the batch run, the reaction mixture which contains the desired product is withdrawn. We focus on how to share a limited amount of a resource, in this case, the feed flow to the reactors between the semi-batch reactors, in order to determine an optimal overall operation for given initial conditions and a given production schedule. This is a common challenge in the process industry when large quantities of products are produced in multi-product multi-batch plants and suitable production sequences have to be defined and executed (Nie et al. 2015).

1.1 Contribution

The goal of this contribution is to provide a distributed optimization strategy for semi-batch processes with end-point constraints that are subject to overarching constraints and to compare different algorithms for the distributed allocation of shared resources between dynamic systems. We restrict ourselves to dual-based methods that do not require the knowledge of the objective values of the sub-systems. In addition to the sub-gradient method and ADMM, we apply the augmented Lagrangian based algorithm for distributed non-convex optimization (ALADIN) and adapt it so that it can also handle overarching inequality constraints as they arise for systems that share finite amounts of resources.

The challenge in solving this type of problem is that during the iterations of the dual-based distributed optimization methods, significant changes in the arc structure, i.e. in the active sets of the constraints of the sub-systems, occur for small changes in the dual variables.

We present a heuristic approach for the solution of such resource allocation problems for dynamic systems with terminal constraints and free final times, such that the different sub-systems meet the constraints in the presence of overarching resource constraints.

1.2 Notation

Sub-system related vectors \(x_{i}\in \mathbb {R}^{n_{i}},\ i\in 1,\ldots ,N\), are stacked into one large vector \(x = [x_{1}^{T},x_{2}^{T},\ldots ,x_{N}^{T} ]^{T} \in \mathbb {R}^{n}\) of variables on the system level. The subscript i is used as an indicator for the different sub-systems throughout the paper. The superscript (k) indicates the current iteration k. To index the j-th element of a vector x, the notation x[j] is used.

The Euclidean norm of a vector x is indicated by \(\left\Vert x\right\Vert _{2}\). The infinity norm of a vector \(x \in \mathbb {R}^{m}\) is defined by \(\left\Vert x\right\Vert _{\infty } = \max \left\{ \left| x[1]\right| ,\ldots ,\left| x[m]\right| \right\}\), where \(\left| \cdot \right|\) is the absolute value of a scalar and the \(\max\) operator followed by \(\{\cdot \}\) indicates the maximum element of the set.
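For concreteness, the two norms correspond to standard NumPy operations (an illustrative aside, not part of the formal development):

```python
import numpy as np

x = np.array([3.0, -7.0, 2.0])
norm2 = np.linalg.norm(x)              # Euclidean norm ||x||_2 = sqrt(62)
norm_inf = np.linalg.norm(x, np.inf)   # infinity norm = max(|x[j]|) = 7.0
```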

2 Theoretical background

First, a general mathematical problem formulation for a single dynamic system is presented, which is then extended to several systems that share a common resource. The extended problem is put into a standard form and algorithms for the distributed solution of the problem are introduced.

2.1 Problem formulation for a sub-system trajectory

A general dynamic or trajectory optimization problem over a fixed time interval \(t\in [t_{0},t_{f}]\) can be written as follows:

$$\begin{aligned} \min _{\begin{array}{c} u(t)\\ \forall t \in [t_{0},t_{f}] \end{array}}&\quad \left( \varUpsilon \left( \chi (t_{f})\right) + \int _{t_{0}}^{t_{f}} \varTheta \left( \chi (t),u(t),t \right) dt \right) ,&\end{aligned}$$
(1a)
$$\begin{aligned} \mathrm {s.t.}&\quad \dot{\chi }(t) = F\left( \chi (t),u(t),t \right) , \quad t \in [t_{0},t_{f}],\ \end{aligned}$$
(1b)
$$\begin{aligned}&\quad \chi (t_{0})=\chi _{0}, \quad \end{aligned}$$
(1c)
$$\begin{aligned}&\quad P\left( \chi (t),u(t),t \right) \le 0, \quad t \in [t_{0},t_{f}],\end{aligned}$$
(1d)
$$\begin{aligned}&\quad T\left( \chi (t_{f}) \right) \le 0. \end{aligned}$$
(1e)

In dynamic optimization, there is a distinction between state variables \(\chi (t)\) and inputs u(t) to the system. From the inputs u(t) and the initial condition, \(\chi (t_{0}) = \chi _{0}\), the state variables can be computed using the model equations, Eq. 1b. Thus u(t) are the degrees of freedom, while \(\chi (t)\) are the dependent variables. The goal is to find the best inputs such that the constraints are satisfied and some performance measure of the resulting trajectory is maximized or minimized.

The objective is given in the Bolza form, which consists of a scalar performance measure at the end of the horizon, \(\varUpsilon\), and an integral part that tracks some scalar performance measure over the whole path of the trajectory, \(\varTheta\). Similar to the objective, the constraints are defined as terminal constraints (T) and path constraints (P) (Sargent 2000).

2.2 Problem formulation for multiple trajectories with shared inputs

The problem of interest in this paper is to optimize N trajectories for N sub-systems that share resources and start as well as end their operation possibly at different times. This can be formulated as the following dynamic optimization problem:

$$\begin{aligned} \min _{\begin{array}{c} u_{i}(t)\\ \forall (t,\ i)\ \in \ ([t_{min},t_{max}],\ \{1,\ldots ,N\}) \end{array}}&\quad \sum _{i\in \{1,\ldots ,N\}}^{} \left( \varUpsilon _i \left( \chi _i(t_{f,i})\right) + \int _{t_{0,i}}^{t_{f,i}} \varTheta _{i} \left( \chi _{i}(t),u_{i}(t),t \right) dt \right) , \end{aligned}$$
(2a)
$$\begin{aligned} \mathrm {s.t.}&\quad \sum _{i\in \{1,\ldots ,N\}}^{} u_i(t) \le u_{shared,max}(t), \quad t \in [t_{min},t_{max}], \end{aligned}$$
(2b)
$$\begin{aligned}&\quad \dot{\chi }_{i}(t) = F_{i}\left( \chi _{i}(t),u_{i}(t),t \right) , \nonumber \\&\quad \quad \quad t \in [t_{0,i},t_{f,i}],\ \forall i \in \{1,\ldots ,N\}, \end{aligned}$$
(2c)
$$\begin{aligned}&\quad \chi _{i}(t_{0,i})=\chi _{0,i}, \quad \forall i \in \{1,\ldots ,N\}, \end{aligned}$$
(2d)
$$\begin{aligned}&\quad P_{i}\left( \chi _{i}(t),u_{i}(t),t \right) \le 0, \nonumber \\&\quad \quad \quad t \in [t_{0,i},t_{f,i}],\ \forall i \in \{1,\ldots ,N\}, \end{aligned}$$
(2e)
$$\begin{aligned}&\quad T_{i}\left( \chi _{i}(t_{f,i}) \right) \le 0, \quad \forall i \in \{1,\ldots ,N\}, \end{aligned}$$
(2f)
$$\begin{aligned}&\quad u_{i}(t) = 0, \quad t \notin [t_{0,i},t_{f,i}], \quad \forall i \in \{1,\ldots ,N\}. \end{aligned}$$
(2g)

The considered time interval is given by \(t_{min}=\min \left\{ t_{0,1},\ldots ,t_{0,N} \right\}\) and \(t_{max}=\max \left\{ t_{f,1},\ldots ,t_{f,N} \right\}\). The objective is to minimize the sum of the individual objectives. The variables \(\chi _{i}\) and \(u_{i}\) belong to sub-system i exclusively, can only be manipulated by the respective sub-system, and, except for coupling via the overarching constraints Eq. 2b, have no impact on the other sub-systems. Due to the different starting and final times of the trajectories, when the trajectory of sub-system i is not active, its use of the resource is fixed at 0 via Eq. 2g.

2.3 Numerical solution methods for trajectory optimization

The problems given in Eqs. 1 and 2 can be solved using different approaches: Direct optimization methods, methods based on Pontryagin’s minimum principle, and methods that are based on the Hamilton–Jacobi–Bellman equations (Bellman 1957; Bertsekas 1995; Pontryagin 2018; von Stryk and Bulirsch 1992). Depending on the selected method, the level of discretization can be chosen: All quantities can be considered infinite-dimensional in time, only the inputs can be discretized, or additionally to the inputs also the states can be fully discretized. If the inputs are discretized, they are usually considered to be piece-wise constant or piece-wise linear within the discretization elements. An overview of the solution methods as well as discretization levels is given in Betts (1998) and Srinivasan et al. (2003).

While there are also other efficient methods, such as parsimonious input parametrization (Rodrigues and Bonvin 2019), direct methods are used here, since they are best suited to handle the overarching constraints Eq. 2b and to obtain reliable numerical solutions (Srinivasan et al. 2003). When the inputs are discretized into equidistant intervals of duration \(\varDelta t\), there are three options to solve such a problem using a direct method: The first option is control vector parametrization, where the states remain continuously defined for every point in time and are determined by integration. The degrees of freedom for the optimization are the values of the inputs. The second is the simultaneous approach, where, in addition to the inputs, also the states are fully discretized such that a large sparse non-linear program (NLP) results (Biegler 2007). Since the states are also degrees of freedom, the resulting trajectory satisfies the model equations only once the NLP has been solved. On the other hand, this method often proves to be more robust. The third option is multiple shooting, in which the time is divided into several intervals, and in each interval control vector parametrization is used to determine the solution. Matching of the states at the ends of the intervals is enforced by additional boundary conditions (Bock and Plitt 1985). In this contribution, control vector parametrization is applied and a constant-stepsize 4th-order Runge-Kutta method is used for integration, because this renders all derivatives, of the objective as well as of the constraints, dependent only on the inputs.
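The combination of control vector parametrization with a fixed-step 4th-order Runge-Kutta integrator can be sketched as follows (a minimal illustration with a hypothetical scalar model \(\dot{\chi } = -\chi + u\); the function names are illustrative, not from the paper):

```python
import numpy as np

def rk4_step(f, chi, u, t, dt):
    """One fixed-step 4th-order Runge-Kutta step for chi' = f(chi, u, t)."""
    k1 = f(chi, u, t)
    k2 = f(chi + 0.5 * dt * k1, u, t + 0.5 * dt)
    k3 = f(chi + 0.5 * dt * k2, u, t + 0.5 * dt)
    k4 = f(chi + dt * k3, u, t + dt)
    return chi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(f, chi0, u_grid, t0, dt):
    """Control vector parametrization: the inputs are piecewise constant on
    the grid and the states follow purely by integration, so the whole
    trajectory depends only on the inputs (and the initial condition)."""
    chi = np.asarray(chi0, dtype=float)
    traj = [chi]
    for p, u in enumerate(u_grid):
        chi = rk4_step(f, chi, u, t0 + p * dt, dt)
        traj.append(chi)
    return np.array(traj)

# Hypothetical scalar example: chi' = -chi + u, three input intervals
f = lambda chi, u, t: -chi + u
traj = simulate(f, chi0=[1.0], u_grid=[0.0, 0.5, 1.0], t0=0.0, dt=0.1)
```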

2.4 Discretized problem formulation

The discretization of the problem in Eq. 1 can be done as described in the previous subsection; however, for the problem formulation with multiple trajectories and shared inputs, the synchronization of time between the different trajectories needs to be assured. Only if the starting and ending times lie on the same time grid as the discretization of the inputs can the overarching constraints be enforced at every point in time. The starting and ending times are thus expressed as multiples of the shared minimum discretization duration \(\varDelta t\) via the following relationship:

$$\begin{aligned} t_{0,i}&= N_{i,0} \ \varDelta t, \end{aligned}$$
(3)
$$\begin{aligned} t_{f,i}&= N_{i,f} \ \varDelta t. \end{aligned}$$
(4)

For each sub-system i, the sets of points \(\varPsi _{i} = \{N_{i,0} ,\ldots , N_{i,f}-1\}\) are defined as counterparts to the continuous time intervals \([t_{0,i},t_{f,i}]\).
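The index bookkeeping of Eqs. 3 and 4 can be illustrated as follows (the starting and final indices and the names `N0`, `Nf`, `Psi` are hypothetical):

```python
# Hypothetical start/end indices N_{i,0}, N_{i,f} for three sub-systems,
# all expressed on the shared grid with step dt.
N0 = {1: 0, 2: 2, 3: 4}
Nf = {1: 6, 2: 9, 3: 10}

# Psi_i = {N_{i,0}, ..., N_{i,f} - 1}: grid points where sub-system i is active
Psi = {i: set(range(N0[i], Nf[i])) for i in N0}

N_min = min(N0.values())   # first grid point of the overall horizon
N_max = max(Nf.values())   # last grid point of the overall horizon

# Eq. 5g: outside Psi_i, sub-system i's resource use is fixed to 0, so at
# each grid point p only the active sub-systems enter constraint 5b.
active_at = {p: [i for i in Psi if p in Psi[i]] for p in range(N_min, N_max)}
```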

Similar to the continuous case, \(N_{min}\) and \(N_{max}\) are defined as the minimum and maximum over all sub-systems i of \(N_{i,0}\) and \(N_{i,f}\). This leads to the following problem formulation:

$$\begin{aligned} \min _{ \begin{array}{c} \chi _{i,p},\ u_{i,p},\\ \forall (p,\ i) \ \in \ (\{N_{min},\ldots ,N_{max}\},\ \{1,\ldots ,N\}) \end{array}}&\quad \sum _{i\in \{1,\ldots ,N\}}^{} \left( \varUpsilon _i(\chi _{i,N_{i,f}}) + \sum _{p\in \varPsi _{i}}^{} \varTheta _{i} \left( \chi _{i,p},u_{i,p},\varDelta t,p\right) \right) , \end{aligned}$$
(5a)
$$\begin{aligned} \mathrm {s.t.}&\quad \sum _{i \in \{1,\ldots ,N\}}^{} u_{i,p} \le u_{shared,max,p}, \nonumber \\&\quad \quad \quad \quad \forall p\in \{N_{min},\ldots ,N_{max}\}, \end{aligned}$$
(5b)
$$\begin{aligned}&\quad \tilde{F}_{i}\left( \chi _{i,p},\chi _{i,p+1},u_{i,p},\varDelta t,p\right) = 0, \nonumber \\&\quad \quad \quad \quad \forall p\in \varPsi _{i}, i \in \{1,\ldots ,N\}, \end{aligned}$$
(5c)
$$\begin{aligned}&\quad \chi _{i,N_{i,0}}=\chi _{0,i}, \quad \forall i \in \{1,\ldots ,N\}, \end{aligned}$$
(5d)
$$\begin{aligned}&\quad P_{i}\left( \chi _{i,p},u_{i,p},\varDelta t,p\right) \le 0, \nonumber \\&\quad \quad \quad \quad \forall p\in \varPsi _{i}, i \in \{1,\ldots ,N\}, \end{aligned}$$
(5e)
$$\begin{aligned}&\quad T_{i}\left( \chi _{i,N_{i,f}} \right) \le 0, \quad \forall i \in \{1,\ldots ,N\}, \end{aligned}$$
(5f)
$$\begin{aligned}&\quad u_{i,p} = 0, \quad \forall p\notin \varPsi _{i}, i \in \{1,\ldots ,N\}. \end{aligned}$$
(5g)

In this formulation, p is the discrete time index. Note that in this case the models \(\tilde{F}_{i}\) are defined implicitly, connecting the old and the new states. This is used to express numerical integration or full discretization. Furthermore, it should be noted that the path constraints \(P_{i}\) are only defined at the grid points.

The resulting optimization problem is non-convex and can have multiple local minima; solving such problems to global optimality is in general \(\mathscr {NP}\)-hard (Esposito and Floudas 2000; Papamichail and Adjiman 2002).

2.5 Problem formulation in the standard form of distributed optimization

The problem of optimizing trajectories with shared resources across system boundaries from Eq. 5 can be written in the standard form of a general sharing problem, cf. Boyd et al. (2010),

$$\begin{aligned} \min _{x_{i},\ \forall i \in \{1,\ldots ,N\}}&\quad \sum _{i \in \{1,\ldots ,N\}}^{} f_i(x_i), \end{aligned}$$
(6a)
$$\begin{aligned} \mathrm {s.t.}&\quad \sum _{i \in \{1,\ldots ,N\}}^{} A_i x_i \le b,\end{aligned}$$
(6b)
$$\begin{aligned}&\quad g_i(x_i)\le 0, \quad i \in \{1,\ldots ,N\}. \end{aligned}$$
(6c)

The variables \(x_{i}\) comprise the inputs and the states of the sub-systems, \(dim (x_{i}) = dim(\chi _{i})+dim(u_{i})\). The inputs of the trajectory optimization problem described in the previous subsection are given by the linear mapping \(u_{i} = A_i x_i\), with \(u_{i} = [u_{i,N_{i,0}},u_{i,N_{i,1}},\ldots ,u_{i,N_{i,f}-1}]^{T}\). The state variables are given by \(\chi _{i} = B_i x_i\), with \(\chi _{i} = [\chi _{i,N_{i,0}},\chi _{i,N_{i,1}},\ldots ,\chi _{i,N_{i,f}}]^{T}\). The initial conditions, model equations, and system-specific constraints are described by the \(n_{g_{i}}\)-dimensional inequality constraint function \(g_{i}\). The dimension of the overarching constraints is m, i.e., \(dim(b)=dim(A_{i} x_{i})=dim(u_{i})=dim(u_{shared,max})=m\).
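The mappings \(A_i\) and \(B_i\) are simple selector matrices. A minimal sketch, assuming the (illustrative) stacking \(x_i = [u_i^{T},\chi _i^{T}]^{T}\):

```python
import numpy as np

# Hypothetical sub-system with 3 inputs and 4 states, stacked as
# x_i = [u_i; chi_i] (this ordering is an illustrative assumption).
n_u, n_chi = 3, 4
n_x = n_u + n_chi

# A_i selects the inputs, B_i the states: u_i = A_i x_i, chi_i = B_i x_i
A_i = np.hstack([np.eye(n_u), np.zeros((n_u, n_chi))])
B_i = np.hstack([np.zeros((n_chi, n_u)), np.eye(n_chi)])

x_i = np.arange(n_x, dtype=float)   # x_i = [0, 1, 2, 3, 4, 5, 6]
u_i = A_i @ x_i                     # -> [0, 1, 2]
chi_i = B_i @ x_i                   # -> [3, 4, 5, 6]
```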

2.6 Necessary conditions of optimality for distributed optimization

For this problem in standard form, the Lagrangian of the problem is given by Bertsekas (1999):

$$\begin{aligned} \mathscr {L}(x,\lambda ,\mu )&:= \sum _{i \in \{1,\ldots ,N\}}^{} \left( f_i(x_i) \right) + \lambda ^T \left( \sum _{i \in \{1,\ldots ,N\}}^{} A_i x_i - b \right) + \sum _{i \in \{1,\ldots ,N\}}^{} \left( \mu _i^{T} (g_i(x_i)) \right) ,\nonumber \\&=\sum _{i \in \{1,\ldots ,N\}}^{} \left( f_i(x_i) + \lambda ^T A_i x_i - \frac{1}{N} \lambda ^T b + \mu _i^{T} (g_i(x_i)) \right) , \nonumber \\&=\sum _{i \in \{1,\ldots ,N\}}^{} \mathscr {L}_i(x_i,\lambda ,\mu _{i}), \end{aligned}$$
(7)

where \(\lambda\) are the Lagrange multipliers corresponding to the overarching constraints in Eq. 6b and \(\mu _{i}\) are the Lagrange multipliers for the sub-system specific constraints in Eq. 6c. Using the Lagrangian, the first-order necessary conditions of optimality (Karush-Kuhn-Tucker conditions) can be expressed as:

$$\begin{aligned} \nabla _{x_{i}} \mathscr {L}_{i}(x_{i},\lambda ,\mu _{i})&= 0,\quad \forall i \in \{1,\ldots ,N\}, \end{aligned}$$
(8a)
$$\begin{aligned} g_i(x_i)&\le 0,\quad \forall i \in \{1,\ldots ,N\},\end{aligned}$$
(8b)
$$\begin{aligned} \mu _i&\ge 0,\quad \forall i \in \{1,\ldots ,N\},\end{aligned}$$
(8c)
$$\begin{aligned} \sum _{i=1}^{N} A_i x_i - b&\le 0, \end{aligned}$$
(8d)
$$\begin{aligned} \lambda&\ge 0. \end{aligned}$$
(8e)

The interesting property of these conditions is that Eqs. 8a–8c can be evaluated independently for each sub-system i and only Eqs. 8d and 8e require coordination between the different sub-systems.

2.7 Distributed solution algorithms based on the dual problem

While Eqs. 8 can be solved monolithically via state-of-the-art solvers, in this contribution methods that exploit the distributed structure of the problem are investigated.

We focus on hierarchical methods, where all sub-system-specific decisions are taken in a distributed fashion and only the satisfaction of the overarching constraints, Eqs. 8d and 8e, is enforced on the coordination layer. These methods are also known as dual methods, as they make use of the dual variables or Lagrange multipliers. In our case, the dual variables of interest are the ones corresponding to the overarching constraints, i.e. \(\lambda\).

Using the solution to Eqs. 8a–8c, written as \(\inf _{x_{i},\ \mu _{i}\ge 0} \mathscr {L}_i(x_i,\lambda ,\mu _{i})\), the dual function can then be defined:

$$\begin{aligned} d(\lambda ) = \inf _{x ,\ \mu \ge 0} \mathscr {L}(x,\lambda ,\mu ) = \sum _{i \in \{1,\ldots ,N\}}^{} \inf _{x_{i} ,\ \mu _{i}\ge 0} \mathscr {L}_i(x_i,\lambda ,\mu _{i}). \end{aligned}$$
(9)

Using this dual function of \(\lambda\), the optimality condition can be expressed as finding the maximum of \(d(\lambda )\) with \(\lambda \ge 0\), which is called the dual problem:

$$\begin{aligned} \max _{\lambda \ge 0} d(\lambda ). \end{aligned}$$
(10)

Due to the infimum in Eq. 9, the dual function is in general not known explicitly. However, according to Danskin’s theorem (Bertsekas 1999), a sub-gradient can be determined at \(\lambda\) via:

$$\begin{aligned} \partial d(\lambda ) = \partial \sum _{i \in \{1,\ldots ,N\}}^{} \inf _{x_{i},\ \mu _{i}\ge 0} \mathscr {L}_i(x_i,\lambda ,\mu _{i}) = \sum _{i \in \{1,\ldots ,N\}}^{} A_i x_i(\lambda ) - b. \end{aligned}$$
(11)

In this contribution, we compare iterative methods for the maximization of the dual which do not require the explicit knowledge of the value of the dual function and thus of the different individual objectives. As measures of convergence, we define two criteria. The primal feasibility, \(\varPhi _{Primal}\), is a measure of the satisfaction of the overarching constraints of the original problem. At the same time, due to Eq. 11 and the fact that the dual is always concave, it is also a measure of the vanishing of the gradient (Boyd and Vandenberghe 2004).

$$\begin{aligned} \varPhi _{Primal}[j]&= \max \left\{ 0, \left( \sum _{i \in \{1,\ldots ,N\}}^{} A_i x_i(\lambda ) - b\right) [j] \right\} , \quad j \in \{1,\ldots ,m\}. \end{aligned}$$
(12)

Here we highlight that \(x_i\) is a function of \(\lambda\). If all elements of \(\varPhi _{Primal}\) are equal to 0, the overarching constraints are satisfied. In addition to the primal feasibility, which measures the satisfaction of the constraints, a measure of convergence is required in order to prevent termination when a solution is primal feasible but not yet optimal. The dual feasibility, \(\varPhi _{Dual}\), can be interpreted as the satisfaction of the optimality criterion for the dual problem, i.e., the gradient approaching the 0 vector. Thus we define the dual feasibility as the finite-difference approximation of the gradient of the dual function:

$$\begin{aligned} \varPhi _{Dual}[j]&=\frac{ \left| \lambda ^{+}[j] - \lambda [j] \right| }{ \alpha [j] }, \end{aligned}$$
(13)

This second feasibility criterion is a measure of how far the solution can deviate from active overarching constraints and is essential for inequality constrained problems, since primal feasibility can always be achieved by sufficiently large Lagrange multipliers. This ensures that the solution not only satisfies \(\sum _{i=1}^{N} A_i x_i^{(k)}[j] - b[j] \le \epsilon\) but also \(\sum _{i=1}^{N} A_i x_i^{(k)}[j] - b[j] \ge - \epsilon\) for all active overarching constraints j. Only if a solution is both primal and dual feasible has a saddle point of the Lagrangian been found that satisfies the conditions of optimality.

Since we can optimize numerically only with a certain numerical error, we define a set of \(x^{*},\lambda ^{*}\), and \(\mu ^{*}\) to be optimal if the following holds:

$$\begin{aligned} \{x^{*},\lambda ^{*}, \mu ^{*}\} = \{ x,\lambda , \mu \, \mid \, \left\Vert \varPhi _{Primal}\right\Vert _{\infty } \le \epsilon _{Feas,Primal} \wedge \left\Vert \varPhi _{Dual}\right\Vert _{\infty } \le \epsilon _{Feas,Dual} \}, \end{aligned}$$

where \(\epsilon _{Feas,Primal}\) and \(\epsilon _{Feas,Dual}\) are the desired numerical tolerances and \(x_i\) minimizes the objective of sub-system i.
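The two feasibility measures and the resulting termination test can be sketched as follows (the function names are illustrative, not from the paper):

```python
import numpy as np

def primal_feasibility(Ax_list, b):
    """Eq. 12: element-wise violation of the overarching constraints,
    max{0, sum_i A_i x_i - b}."""
    return np.maximum(0.0, np.sum(Ax_list, axis=0) - b)

def dual_feasibility(lam_new, lam, alpha):
    """Eq. 13: finite-difference approximation of the dual gradient."""
    return np.abs(lam_new - lam) / alpha

def converged(phi_primal, phi_dual, eps_primal=1e-6, eps_dual=1e-6):
    """Termination test: both infinity norms below their tolerances."""
    return (np.max(np.abs(phi_primal)) <= eps_primal
            and np.max(np.abs(phi_dual)) <= eps_dual)
```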

2.7.1 Sub-gradient method

The simplest method for the maximization of the dual is to follow the direction of the steepest ascent, i.e., to use the direction of the sub-gradient (Shor 2012). Here the challenge is the selection of a suitable stepsize. Since the dual function may be non-smooth, depending on the solution structure of the different trajectories, the stepsize selection criteria for non-smooth optimization derived in Nesterov (2004, p. 142) should be satisfied:

$$\begin{aligned}&\alpha ^k > 0, \end{aligned}$$
(14a)
$$\begin{aligned}&\alpha ^k \rightarrow 0, \end{aligned}$$
(14b)
$$\begin{aligned}&\sum _{k=0}^{\infty } \alpha ^k = \infty . \end{aligned}$$
(14c)

Since the impact of small changes in the dual variables is not the same for all variables \(\lambda [j],\ j\in \{1,\ldots ,m\}\), a scheme is needed that adapts the stepsizes individually.

While there exist methods to evaluate the optimal stepsize at the current point using the Lipschitz constant, as explained in Bertsekas and Tsitsiklis (1989) and Kozma et al. (2014), determining the constant is difficult since the dual cannot be evaluated explicitly. Furthermore, since this constant is not valid globally, the stepsizes either have to be adapted during the maximization of the dual or be chosen conservatively enough to be valid throughout the domain of the dual function.

The sub-gradient method can be considered as alternating local optimization of the sub-systems and adaptation of the Lagrange multipliers on the coordination layer. We propose to decrease the stepsize of a specific overarching constraint every time the sign of the corresponding element of \(\sum _{i=1}^{N} A_i x_i(\lambda ) - b\) changes. This is equivalent to the inequality constraint becoming active or inactive respectively. Specifically the following adaptation of \(\alpha\) is used (cf. Algorithm 1, line 8):

$$\begin{aligned} \alpha ^{(k)}[j] = {\left\{ \begin{array}{ll} \gamma _{Decrease} \ \alpha ^{(k-1)}[j] , &{} \text {if}\ \left( \sum _{i=1}^{N} A_i x_i^{(k)} - b \right) [j] \left( \sum _{i=1}^{N} A_i x_i^{(k-1)} - b \right) [j] \le 0, \\ \alpha ^{(k-1)}[j] , &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
(15)

If one of the stepsizes is too large and it has an influence on the Lagrange multiplier (note that if a constraint is inactive, the multiplier is fixed at 0), then this leads inevitably to a diminishing of the stepsize in this direction. The factor \(\beta\) in Algorithm 1 prevents the stepsize from decreasing too fast due to oscillating responses while maintaining the stepsize as large as possible if consecutive steps have the same direction.
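A minimal sketch of this dual ascent loop with the stepsize adaptation of Eq. 15, assuming a hypothetical callback `total_usage(lam)` that returns the summed resource usage \(\sum _{i} A_i x_i(\lambda )\) of the local solutions:

```python
import numpy as np

def dual_ascent(total_usage, b, lam0, alpha0,
                gamma_decrease=0.5, max_iter=500, eps=1e-6):
    """Projected sub-gradient ascent on the dual with the per-constraint
    stepsize adaptation of Eq. 15. `total_usage(lam)` stands in for the
    distributed solution of all local sub-problems at the current prices."""
    lam, alpha = np.array(lam0, float), np.array(alpha0, float)
    g_prev = None
    for _ in range(max_iter):
        g = total_usage(lam) - b                      # sub-gradient, Eq. 11
        if g_prev is not None:
            alpha[g * g_prev <= 0] *= gamma_decrease  # Eq. 15: sign change
        lam_new = np.maximum(0.0, lam + alpha * g)    # projected step, lam >= 0
        if (np.max(np.abs(lam_new - lam) / alpha) <= eps   # dual feas., Eq. 13
                and np.max(np.maximum(0.0, g)) <= eps):    # primal feas., Eq. 12
            return lam_new
        lam, g_prev = lam_new, g
    return lam

# Toy sharing problem: two quadratic sub-systems min 0.5*(x_i - c_i)^2 whose
# price response is x_i(lam) = c_i - lam, coupled by sum_i x_i <= 3.
c = np.array([3.0, 2.0])
usage = lambda lam: np.array([np.sum(c - lam[0])])
lam_star = dual_ascent(usage, b=np.array([3.0]), lam0=np.array([0.0]),
                       alpha0=np.array([0.4]))
```

In the toy example, the analytic optimum is \(\lambda ^{*}=1\), at which the shared resource is used exactly (\(x = [2,1]\)).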

[Algorithm 1: sub-gradient method with individual stepsize adaptation]

Other possible selections of \(\alpha\) can be found in Nesterov (2004). Regardless of the selection of the stepsize, the provable convergence rate for strictly convex problems is at best \(\mathscr {O}(1/k)\).

2.7.2 ADMM

A more robust method is the alternating direction method of multipliers (ADMM) (Boyd et al. 2010). It uses the augmented Lagrangian:

$$\begin{aligned} \mathscr {L}_{\rho ,i}(x_i,\lambda ,\mu _{i},z_i) = \mathscr {L}_i(x_i,\lambda ,\mu _{i}) + \frac{\rho }{2} \left\Vert A_{i} x_{i} - z_{i} \right\Vert _{2}^{2}. \end{aligned}$$
(16)

In addition to the linear penalty term, the deviation from a feasible use of the shared resources \(z_{i}\) is penalized quadratically. Essentially, the penalty terms convexify the problems around points that satisfy the overarching constraints, which accelerates the initial convergence. The variables \(z_{i}\) are determined on the coordinator level and are a projection of the current responses of the different sub-systems onto the feasible region. The stepsize \(\alpha\) for the update of the Lagrange multipliers is \(\frac{\rho }{N}\) in the case of ADMM. However, since the responses of the systems are determined not only by the prices but also by the \(z_{i}\), the dual feasibility is redefined as:

$$\begin{aligned} \varPhi _{Dual}[j] = \rho \left( \sum _{i=1}^{N} \left| A_i x_i^{(k)}[j] - z_{i}^{(k)}[j]\right| \right) . \end{aligned}$$
(17)

The convergence rate for convex problems is also \(\mathscr {O}(1/k)\) (Hong and Luo 2017; Kozma et al. 2014).
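One simple way to project the responses onto the feasible region, in the spirit of the z-update, is an equal-share correction of the constraint violation (an illustrative sketch only; the actual update in Algorithm 3 treats inactive constraints differently, as discussed below):

```python
import numpy as np

def update_z(Ax_list, b):
    """Illustrative equal-share projection for the inequality-constrained
    sharing problem: where sum_i A_i x_i exceeds b, the excess is split
    equally among the N sub-systems; where the constraint holds, the
    current responses are kept as references."""
    Ax = np.array(Ax_list)                          # shape (N, m)
    excess = np.maximum(0.0, Ax.sum(axis=0) - b)    # violation per constraint
    return Ax - excess / len(Ax_list)               # z_i, summing to <= b
```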

[Algorithm 2: ADMM for equality-constrained sharing problems]
[Algorithm 3: ADMM for inequality-constrained sharing problems]

In Algorithm 2 the unscaled version of ADMM for equality-constrained sharing problems is shown, cf. Boyd et al. (2010, p. 59). When adapting the algorithm to the inequality-constrained problem considered here, the dual variables need to satisfy \(\lambda \ge 0\) and the z-variables need to be adjusted depending on whether the overarching constraints are active or not. When they are active, the same update as in Algorithm 2 applies; however, if the constraints are not active, the references \(z_{i}\) are based on the previous solutions of the sub-systems. This penalty is required since otherwise only some variables are quadratically penalized, which can lead to cyclic solution changes in subsequent iterations. We present ADMM including a new and efficient update step to compute the z-variables for inequality-constrained problems in Algorithm 3. To improve convergence, different and variable penalty parameters \(\rho\) are used for each constraint. The penalty parameters are adapted at step 10 of Algorithm 3 using the scheme in Wang and Liao (2001):

$$\begin{aligned} \rho ^{(k)}[j] = {\left\{ \begin{array}{ll} \tau _{Increase} \ \rho ^{(k-1)}[j] , &{} \text {if}\ \varPhi _{Primal}^{(k)}[j] \ge \delta \ \varPhi _{Dual}^{(k)}[j], \\ \tau _{Decrease} \ \rho ^{(k-1)}[j] , &{} \text {if} \ \delta \ \varPhi _{Primal}^{(k)}[j] \le \varPhi _{Dual}^{(k)}[j], \\ \rho ^{(k-1)}[j] , &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
(19)

The factor \(\delta\) is the maximally allowed difference between primal and dual feasibility before the penalty parameter is adapted to rebalance the proportion. The parameters \(\tau _{Increase} > 1\) and \(\tau _{Decrease} < 1\) adjust the penalty parameter \(\rho\) if necessary. The update ensures that primal and dual feasibility are kept in balance or, in other words, that far away from the optimum large changes in \(\lambda\) are made. Close to the optimum, the changes are reduced and additionally the quadratic penalization is decreased, to ensure that the solution without the quadratic penalty also satisfies the overarching constraints.
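The residual-balancing update of Eq. 19 can be sketched as follows (the parameter values are illustrative):

```python
import numpy as np

def update_rho(rho, phi_primal, phi_dual, delta=10.0,
               tau_increase=2.0, tau_decrease=0.5):
    """Per-constraint residual balancing of the penalty parameter (Eq. 19):
    grow rho[j] where the primal residual dominates, shrink it where the
    dual residual dominates, and leave it unchanged otherwise."""
    rho = np.array(rho, float).copy()
    grow = phi_primal >= delta * phi_dual
    shrink = delta * phi_primal <= phi_dual
    rho[grow] *= tau_increase
    rho[shrink & ~grow] *= tau_decrease
    return rho
```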

2.7.3 ALADIN

Another method that uses the augmented Lagrangian is the augmented Lagrangian based algorithm for distributed non-convex optimization in Houska et al. (2016). In contrast to ADMM, here all variables of sub-system i are penalized for deviating from the reference \(z_{i}\):

$$\begin{aligned} \mathscr {L}_{\rho ,i}(x_i,\lambda ,\mu _{i},z_i) = \mathscr {L}_i(x_i,\lambda ,\mu _{i}) + \frac{\rho }{2} \left\Vert x_{i} - z_{i} \right\Vert _{2}^{2}. \end{aligned}$$
(20)

Additionally to reporting the consumption of the shared resources, the sub-systems also report their derivatives of the objective and of the active constraints with respect to the local decision variables to the coordination layer in each iteration k. The Hessian and gradient approximations are then calculated using the constraint Jacobian information \(\mathscr {C}_{i}^{(k)}\) (\(\mathscr {C}_{i}^{Active(k)}\)) of the (active) constraints from Eq. 6c:

$$\begin{aligned} \mathscr {C}_{i}^{Active(k)}[l,:] = {\left\{ \begin{array}{ll} \mathscr {C}_{i}^{(k)}[l,:], &{}\, \text {if } g_{i}(x_i^{(k)})[l]=0 ,\\ 0, &{}\, \text {otherwise,} \end{array}\right. } \quad \text {for } l\in \{1,\ldots ,n_{g_{i}}\}, \forall i\in \{1,\ldots ,N\}, \end{aligned}$$
(21)

where \(\mathscr {C}_{i}^{(k)} = \nabla _{x_{i}} g_{i} (x_{i})|_{x_{i}=x_{i}^{(k)}}\). The modified gradient and Hessian approximation are:

$$\begin{aligned} \mathscr {G}_{i}^{(k)} =&\left. \nabla _{x_{i}} f_{i} (x_{i}) \right| _{x_{i} = x_{i}^{(k)}} , \end{aligned}$$
(22)
$$\begin{aligned} \mathscr {H}_{i}^{(k)} \approx&\left. \nabla _{x_{i}}^{2} \left( f_{i} (x_{i}) + \mu _{i}^{T} g_{i}(x_i) \right) \right| _{x_{i} = x_{i}^{(k)},\ \mu _{i} = \mu _{i}^{(k)}} . \end{aligned}$$
(23)
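The masking of Eq. 21 can be sketched as follows with NumPy. The tolerance `tol` is an assumption of this sketch: in floating-point arithmetic, the exact activity test \(g_{i}(x_i^{(k)})[l]=0\) has to be replaced by a threshold.

```python
import numpy as np

# Sketch of Eq. 21: keep Jacobian rows of active constraints, zero the rest.
def active_jacobian(C, g, tol=1e-8):
    """C: (n_g, n_x) Jacobian of g_i at x_i^(k); g: values of g_i(x_i^(k)) <= 0."""
    C_active = C.copy()
    C_active[g < -tol, :] = 0.0   # strictly negative g => inactive => zero row
    return C_active

C = np.array([[1.0, 2.0], [3.0, 4.0]])
g = np.array([0.0, -1.0])          # first constraint active, second inactive
print(active_jacobian(C, g))       # second row is zeroed out
```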

With this information from the sub-systems, instead of performing straight projections onto the feasible set, prices and reference values (\(z_{i}\)) are determined via a quadratic program (QP) that approximates the objective functions and active constraints of the sub-systems. The algorithm can be seen as a combination of sequential quadratic programming (SQP) and ADMM. The benefit of using more information from the sub-systems on the coordinator level is, in general, faster convergence. Houska et al. (2016) show that, in theory, super-linear to quadratic convergence rates are possible.

[Algorithm 4]

The update of the Lagrange multipliers \(\lambda\) also differs from the previous two methods: instead of using sub-gradient information, the Lagrange multipliers \(\lambda _{QP}\) of the overarching constraints in the QP are used in the update step. The algorithm for equality constrained problems, as proposed in Houska et al. (2016), is given in Algorithm 4. The dual feasibility is defined as follows:

$$\begin{aligned} \varPhi _{Dual}[j] = \rho \left( \sum _{i=1}^{N} \left| x_i^{(k)}[j] - z_{i}^{(k)}[j]\right| \right) . \end{aligned}$$
(25)
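A minimal sketch of the dual feasibility measure of Eq. 25; all numerical values are illustrative:

```python
import numpy as np

# Sketch of Eq. 25: dual feasibility per shared interval j,
# summed over the deviations of all sub-systems from their QP references.
def dual_feasibility(rho, x_list, z_list):
    """x_list, z_list: per-sub-system arrays of local variables and references z_i."""
    return rho * sum(np.abs(x - z) for x, z in zip(x_list, z_list))

x = [np.array([1.0, 2.0]), np.array([0.5, 1.0])]   # x_i^(k) per sub-system
z = [np.array([1.0, 1.5]), np.array([0.0, 1.0])]   # references z_i^(k)
print(dual_feasibility(2.0, x, z))                 # elementwise over intervals j
```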

The parameters \(\alpha _{i} \in [0,1]\) can be used to adjust the behaviour of the algorithm to frequent changes of the active set. Houska et al. (2016) provide an additional scheme that utilizes the objective values of the sub-systems, based on which these parameters can be adapted in each iteration to guarantee convergence to a local minimum. This scheme is not considered in this work, because it essentially carries out a monolithic optimization to determine the parameters.

Since trajectory optimization problems with overarching inequality constraints are considered, Eq. 24b is changed to an inequality constraint and the algorithm is modified accordingly. The solution of trajectory optimization problems consists of different arcs, which correspond to active constraints. Since these constraints ultimately act on the inputs, Eq. 24c fixes all \(\varDelta z_{i}\) for the inputs in the QP. Therefore, the equality constraint Eq. 24c is also changed to an inequality, such that the variables \(\varDelta z_{i}\) remain degrees of freedom. If the reference variables \(z_{i}\) resulting from the QP are infeasible, the Hessian of the sub-systems may become indefinite, which occurs mainly at the beginning of the scheme, when the QP approximations of the active constraints do not yet reflect the actual solution. Positive definiteness of \(\mathscr {H}_{i}^{(k)}\) is required for ALADIN (Houska et al. 2016), and we therefore propose the following strategy to enforce this condition: \(\kappa I\) is added to the Hessian, where I is the identity matrix. Since high values of \(\kappa\) penalize large changes in \(\varDelta z_{i}\), \(\kappa\) is decreased with the number of iterations until \(\mathscr {H}_{i}^{(k)}\) becomes indefinite. Then \(\kappa\) remains fixed at this value, or is increased again in subsequent iterations if the Hessian is still indefinite.
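The enforcement of positive definiteness can be sketched as follows. This minimal sketch only shows the increase of \(\kappa\) while the regularized Hessian is still indefinite, not the decrease of \(\kappa\) over the outer iterations; the growth factor and the example matrix are illustrative assumptions:

```python
import numpy as np

# Sketch: add kappa*I to an indefinite local Hessian and grow kappa
# until the regularized matrix is positive definite.
def regularize(H, kappa, shrink=0.5):
    """Return a positive definite H + kappa*I and the kappa actually used."""
    n = H.shape[0]
    while True:
        H_reg = H + kappa * np.eye(n)
        if np.all(np.linalg.eigvalsh(H_reg) > 0):
            return H_reg, kappa
        kappa /= shrink              # still indefinite: increase kappa again

H = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite local Hessian
H_reg, kappa = regularize(H, kappa=1.0)
print(kappa)                               # kappa grew until H + kappa*I > 0
```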

Another challenge that arises in trajectory optimization is that small changes in the Lagrange multipliers \(\lambda\) of the overarching constraints can change the active sets of the sub-systems. In order to reach the correct active set, the stepsize of the algorithm may have to become arbitrarily small. Thus, the \(\alpha _{i}\) are also adapted in each iteration. Eq. 24b is modified to account for smaller values of \(\alpha _{1}\), such that the new reference variables \(z_{i}\) are always feasible according to the coordinator-level QP. In Algorithm 5, the different steps of ALADIN, adjusted to inequality constrained problems, are given.

[Algorithm 5]

2.7.4 Other methods

There is a variety of other dual-based methods, for instance the ones introduced in Maxeiner and Engell (2020) or Wenzel et al. (2016), where, similar to the sub-gradient method, only the Lagrange multipliers and the usage of the shared resources are exchanged. However, these methods are designed for equality-constrained shared resource allocation problems.

Other methods, e.g. those presented in Kozma et al. (2014) and Nesterov (2004), use the objective values of the individual sub-systems. In this contribution, we focus on the presented methods, since they do not require knowledge of the objective values of the individual sub-systems. Hence, they can be applied to problems where, due to confidentiality, the profit of the sub-systems cannot be openly communicated.

3 Problem formulation for multiple trajectories with shared inputs and free final times

Due to the overarching constraint on the sharing of the resources, the terminal states of the trajectories change. With fixed final times, terminal-state boundary conditions that are rendered infeasible by the overarching resource-sharing constraints cannot be satisfied. Thus, additional degrees of freedom, i.e., free final times, are needed to enable the satisfaction of the terminal constraints.

The standard approach to including the final time as an optimization variable in trajectory optimization is time scaling: the time horizon is scaled to the interval [0, 1], discretized, and multiplied by the final time, which is a continuous variable that is minimized. The number of discretization intervals stays constant, but their lengths change. The downside of this approach in the considered scenario with shared resources is that the constraints on the shared resources can no longer be enforced exactly, because the discretization intervals are not synchronized between the sub-systems.
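The desynchronization can be illustrated with a small sketch (the numbers are illustrative): with time scaling, the physical grid spacing of each sub-system becomes \(t_f/n\), so sub-systems with different free final times no longer share interval boundaries.

```python
# With the horizon scaled to [0, 1] and n intervals, the physical
# grid points are t_f * k / n: different t_f give different grids.
def grid(t_f, n):
    return [t_f * k / n for k in range(n + 1)]

g1 = grid(68.0, 17)   # interval length 4.0 h
g2 = grid(60.0, 17)   # interval length ~3.53 h
print(g1[1], g2[1])   # the per-interval sums of the shared resource would
                      # compare inputs over non-overlapping time windows
```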

Another possibility is to adjust the number of intervals. Then, the length of the discretization intervals is fixed and the shared resource constraints can be enforced across all systems. As a result, the additional optimization variable, the number of discrete intervals, is of integer type (Van den Broeck et al. 2011).

In the following, we use the latter approach and consider a single terminal constraint for each sub-system that is feasible without the overarching constraint but not necessarily when it is present. The resulting problem of trajectory optimization with shared resources and free final times for the different trajectories can be written as the following mixed-integer non-linear program (MINLP):

$$\begin{aligned} \min _{ \begin{array}{c} \chi _{i,p} ,\ u_{i,p},\\ (y_{i,p},\ \hat{y}_{i,p})\ \in \ \{0,1\},\\ \forall (p,\ i) \ \in \\ (\{N_{min},\ldots ,N_{max}\},\ \{1,\ldots ,N\}) \end{array}}&\quad \sum _{i \in \{1,\ldots ,N\}}^{} \sum _{p\in \{N_{min},\ldots ,N_{max}\}}^{} \left( \hat{y}_{i,p} \varUpsilon _i(\chi _{i,p}) + y_{i,p} \varTheta _{i} \left( \chi _{i,p},u_{i,p},\varDelta t,p\right) \right) , \end{aligned}$$
(27a)
$$\begin{aligned} \mathrm {s.t.}&\quad \sum _{i \in \{1,\ldots ,N\}}^{} u_{i,p} \le u_{shared,max,p}, \nonumber \\&\quad \quad \quad \quad \forall p\in \{N_{min},\ldots ,N_{max}\}, \end{aligned}$$
(27b)
$$\begin{aligned}&\quad \tilde{F}_{i}\left( \chi _{i,p},\chi _{i,p+1},u_{i,p},\varDelta t,p\right) =0, \nonumber \\&\quad \quad \quad \quad \forall p\in \{p'|y_{i,p'}=1\}, i \in \{1,\ldots ,N\},\end{aligned}$$
(27c)
$$\begin{aligned}&\quad \chi _{i,N_{i,0}}=\chi _{0,i}, \quad \forall i \in \{1,\ldots ,N\},\end{aligned}$$
(27d)
$$\begin{aligned}&\quad P_{i}\left( \chi _{i,p},u_{i,p},\varDelta t,p\right) \le 0, \nonumber \\&\quad \quad \quad \quad \forall p\in \{p'|y_{i,p'}=1\}, i \in \{1,\ldots ,N\},\end{aligned}$$
(27e)
$$\begin{aligned}&\quad T_{i}\left( \chi _{i,p} \right) \le y_{i,p} M , \nonumber \\&\quad \quad \quad \quad \forall p\in \{N_{min},\ldots ,N_{max}\}, i \in \{1,\ldots ,N\},\end{aligned}$$
(27f)
$$\begin{aligned}&\quad T_{i}\left( \chi _{i,p} \right) \ge (y_{i,p}-1) M + 1/M , \nonumber \\&\quad \quad \quad \quad \forall p\in \{N_{min},\ldots ,N_{max}\}, i \in \{1,\ldots ,N\},\end{aligned}$$
(27g)
$$\begin{aligned}&\quad p-N_{i,0} \le y_{i,p} M , \nonumber \\&\quad \quad \quad \quad \forall p\in \{N_{min},\ldots ,N_{max}\}, i \in \{1,\ldots ,N\},\end{aligned}$$
(27h)
$$\begin{aligned}&\quad p-N_{i,0} \ge (y_{i,p}-1) M + 1/M , \nonumber \\&\quad \quad \quad \quad \forall p\in \{N_{min},\ldots ,N_{max}\}, i \in \{1,\ldots ,N\},\end{aligned}$$
(27i)
$$\begin{aligned}&\quad z_{i,p+1} \ge y_{i,p}-y_{i,p+1} , \nonumber \\&\quad \quad \quad \quad \forall p\in \{N_{min},\ldots ,N_{max}\}, i \in \{1,\ldots ,N\},\end{aligned}$$
(27j)
$$\begin{aligned}&\quad z_{i,p+1} \le y_{i,p}+y_{i,p+1}, \nonumber \\&\quad \quad \quad \quad \forall p\in \{N_{min},\ldots ,N_{max}\}, i \in \{1,\ldots ,N\},\end{aligned}$$
(27k)
$$\begin{aligned}&\quad z_{i,p+1} \le 1-y_{i,p+1}, \nonumber \\&\quad \quad \quad \quad \forall p\in \{N_{min},\ldots ,N_{max}\}, i \in \{1,\ldots ,N\},\end{aligned}$$
(27l)
$$\begin{aligned}&\quad u_{i,p} = 0, \quad \quad \forall p\in \{p'|y_{i,p'} = 0\}, i \in \{1,\ldots ,N\}. \end{aligned}$$
(27m)

The binary variables \(y_{i,p}\) and \(\hat{y}_{i,p}\) indicate for each sub-system i in which intervals p the trajectory is active (\(y_{i,p}\)) and in which interval p the trajectory satisfies the terminal constraint (\(\hat{y}_{i,p}\)). The coupling between the continuous and binary variables is established using the big-M method (Nemhauser and Wolsey 1988).
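The big-M linking of Eqs. 27f and 27g can be checked numerically; in this minimal sketch, the value of M is an arbitrary large constant:

```python
# Numeric check of the big-M linking in Eqs. 27f-27g:
#   T <= y*M              forces y = 1 whenever T > 0 (terminal constraint unmet),
#   T >= (y-1)*M + 1/M    forces T >= 1/M whenever y = 1.
M = 1e4

def feasible(T, y):
    return T <= y * M and T >= (y - 1) * M + 1.0 / M

print(feasible(0.5, 1), feasible(0.5, 0))    # True False: T > 0 requires y = 1
print(feasible(-3.0, 0), feasible(-3.0, 1))  # True False: y = 1 requires T >= 1/M
```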

In contrast to the problem given in Eqs. 5, \(N_{max}\) is not the maximum of the \(N_{i,f}\) but must be a sufficiently large integer such that the problem is feasible, i.e., such that all trajectories can satisfy the terminal constraints when the resources are shared.

The problem given in Eqs. 27 can be solved monolithically using an MINLP solver. Since in this contribution distributed solutions are sought, another way to handle the binary variables is to use the heuristic described in Van den Broeck et al. (2011), where the minimal final interval for which the terminal constraint is satisfied is found iteratively. This iterative determination of the final interval is integrated into the scheme for the satisfaction of the overarching constraints.

Each sub-system updates its number of intervals via the following equation:

$$\begin{aligned} N_{i,f} = {\left\{ \begin{array}{ll} N_{i,f} + 1, &{}\, \text {if } T_{i}\left( \chi _{i,N_{i,f}} \right) > 0, \\ N_{i,f} - 1, &{}\, \text {if } T_{i}\left( \chi _{i,N_{i,f}-1} \right) \le 0, \\ N_{i,f}, &{}\, \text {otherwise.} \end{array}\right. } \quad \forall i\in \{1,\ldots ,N\}. \end{aligned}$$
(28)

The adaptation is done if at least one of the terminal constraints cannot be reached with the given final times. As an additional criterion, the number of iterations between two adaptations is increased by 1 each time the final times are adapted.
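The update of Eq. 28 for a single sub-system can be sketched as follows; the terminal-constraint values in `T_vals` are hypothetical:

```python
# Sketch of the final-interval update of Eq. 28 for one sub-system:
# extend the horizon if the terminal constraint T_i <= 0 is violated at N_f,
# shrink it if the constraint is already satisfied one interval earlier.
def update_final_interval(N_f, T):
    """T(p) returns the terminal-constraint value at interval p."""
    if T(N_f) > 0:
        return N_f + 1
    if T(N_f - 1) <= 0:
        return N_f - 1
    return N_f

# Hypothetical terminal-constraint values per interval (feasible from p = 17 on).
T_vals = {15: 2.0, 16: 0.5, 17: -0.1, 18: -0.4}
T = T_vals.__getitem__
print(update_final_interval(16, T))  # -> 17: constraint still violated at 16
print(update_final_interval(18, T))  # -> 17: already satisfied at 17
```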

As the changes in the discrete variables become less frequent, the influence of their adaptation vanishes and the distributed optimization methods converge.

Since the objectives are in general not smooth with respect to discrete changes of the final time, the objective functions of the sub-systems should be scaled with their respective final times, i.e. by dividing the objective by the final time, in order to reduce the effect of the discrete changes.

4 Semi-batch reactor case study

In this paper, we consider a modified version of the isothermal semi-batch reactor with a safety constraint example from Ubrich et al. (1999). This reactor has been widely used as a benchmark in the trajectory optimization literature, e.g., Srinivasan et al. (2003).

A reaction that is first order in each of the reactants is considered:

$$\begin{aligned} A+B\rightarrow C. \end{aligned}$$

Prior to the reaction, the reactor is filled with the amount \(V_{0,i}\ c_{A,0,i}\) of reactant A. The dosage profile of reactant B is a degree of freedom; hence, the feed rates \(u_{i}(t)\) are the manipulated variables. The ordinary differential equations that describe the trajectories of the states in each reactor i are:

$$\begin{aligned} \dfrac{d c_{A,i}(t)}{d t}&= - k \, c_{A,i}(t) \, c_{B,i}(t) - \dfrac{u_{i}(t)}{V_{i}(t)} c_{A,i}(t), \end{aligned}$$
(29)
$$\begin{aligned} \dfrac{d c_{B,i}(t)}{d t}&= - k \, c_{A,i}(t) \, c_{B,i}(t) + \dfrac{u_{i}(t)}{V_{i}(t)} \left( c_{B,in,i} - c_{B,i}(t) \right) , \end{aligned}$$
(30)
$$\begin{aligned} \dfrac{d V_{i}(t)}{d t}&= u_{i}(t). \end{aligned}$$
(31)
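A minimal simulation sketch of the model in Eqs. 29-31 for a single reactor with a constant feed rate. The parameter values are illustrative placeholders, not the values from Table 1, and a simple explicit Euler scheme stands in for a proper integrator:

```python
# Explicit Euler simulation of one semi-batch reactor (Eqs. 29-31).
# All numerical values are illustrative assumptions, not Table 1 data.
k, c_B_in = 0.5, 2.0                      # rate constant, feed concentration of B

def step(x, u, dt):
    c_A, c_B, V = x
    r = k * c_A * c_B                     # reaction rate of A + B -> C
    return [c_A + dt * (-r - u / V * c_A),
            c_B + dt * (-r + u / V * (c_B_in - c_B)),  # feeding B raises c_B
            V + dt * u]

x = [1.0, 0.0, 1.0]                       # c_A0, c_B0, V0
for _ in range(10000):                    # 10 h with dt = 1e-3 h
    x = step(x, 0.05, 1e-3)
c_A, c_B, V = x
n_C = 1.0 * 1.0 - c_A * V                 # n_C = c_A0*V0 - c_A*V (mole balance)
print(V)                                  # volume grew by the fed amount (~1.5)
```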

The trajectories of each reactor i need to satisfy the following constraints:

  • Limitation of the feed rate of the reactant B by \(u_{Max,i}\),

  • The path constraint that the adiabatic temperature rise in the reactor is limited

    $$\begin{aligned} \varDelta T_{ad}(t) = \min \left\{ c_{A,i}(t), c_{B,i}(t) \right\} \frac{(-\varDelta H_R)}{\rho \, c_p}, \end{aligned}$$
    (32)

    which poses a constraint on the concentration \(c_{B,i}(t)\) within the reactor,

  • The path constraint on the volume of the reactor, which requires that the reaction volume cannot exceed the maximum available reactor volume \(V_{Max,i}\).

As a terminal constraint, the amount of C must be above the desired threshold \(n_{C,Des,i}\). This amount can be calculated via \(n_{C,i}(t) = c_{A,0,i} V_{0,i} - c_{A,i}(t) V_{i}(t)\). In Table 1, all case study specific numerical values are given.

Table 1 Case study specific parameters for all reactors i

In trajectory optimization, different criteria can be selected as economically motivated objectives, e.g.:

  • Maximization of the valuable product at the end of the batch time, which yields the maximum material efficiency.

  • Minimization of the time necessary to produce a certain amount of valuable product, which yields as many batches as possible per time.

  • Maximization of productivity, i.e., the amount of valuable product divided by the batch time.

Here, the maximization of the throughput of product C is chosen as the optimization criterion for each reactor i:

$$\begin{aligned} \varUpsilon _i(\chi _{i,N_{i,f}})&= -\frac{n_{C,i,f}}{t_{i,f}} = -\frac{c_{A,i,0} V_{0,i} - c_{A,i,f} V_{i,f} }{(N_{i,f} - N_{i,0}) \varDelta t}. \end{aligned}$$
(33)
Fig. 1: Optimal trajectories for a single semi-batch reactor without overarching constraints for a time discretization of \(\varDelta t=4\) h. On the left, the evolution of the states \(c_{A,1}, c_{B,1}\), and \(V_{1}\) as well as of the amount of final product \(n_{C,1}\) is shown. On the right, the corresponding input profile is shown. The thin lines correspond to the constraints on the variables

In Fig. 1, the optimal trajectories of the states and the input of a single semi-batch reactor without overarching constraints are shown for an input discretization interval of 4 h. On the left, the trajectories of the states and of the amount of final product are shown; the effective constraints on these quantities are indicated by thin horizontal lines in the same line style. On the right, the input trajectory is shown. One can see the presence of different arcs, i.e., of different active constraints: at first, the maximum feed rate constraint is active, then the adiabatic temperature rise at cooling failure limits the feed rate until feeding has to be stopped because the maximum volume of the reaction mixture is reached.

In the case of a single semi-batch reactor, the terminal constraint is satisfied after 17 discrete elements of duration \(\varDelta t = 4\) h.

In addition to the individual limitations of the feed flows, we consider a coupling of the reactors via an overarching constraint on the joint feed flow rate of the reactant B:

$$\begin{aligned} \sum _{i \in \{1,\ldots ,N\}}^{} u_{i,p} \le u_{shared,max}, \quad \forall p\in \{N_{min},\ldots ,N_{max}\}. \end{aligned}$$
(34)

The maximum combined feed flow rate for all reactors is considered to be constant. Dualizing this overarching constraint Eq. 34 according to Eq. 7 yields the following integral cost term in the objectives of the sub-systems:

$$\begin{aligned} \varTheta _{i} \left( \chi _{i,p},u_{i,p},\varDelta t,p\right)&= \lambda [p] u_{i,p}, \quad \forall p\in \{N_{min},\ldots ,N_{max}\}. \end{aligned}$$
(35)
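The overarching constraint of Eq. 34 and the price term of Eq. 35 can be sketched as follows; all numbers are illustrative:

```python
import numpy as np

# Sketch of Eqs. 34-35: check the shared feed constraint per interval p and
# compute the price term each sub-system adds to its objective.
u = np.array([[0.02, 0.01, 0.0],      # reactor 1: feed per interval p
              [0.02, 0.02, 0.01],     # reactor 2
              [0.01, 0.02, 0.02]])    # reactor 3
u_shared_max = 0.05
lam = np.array([0.3, 0.0, 0.1])       # prices lambda[p] from the coordinator

violation = u.sum(axis=0) - u_shared_max   # > 0 where Eq. 34 is violated
price_cost = u @ lam                       # sum_p lambda[p] * u_{i,p}, per reactor
print(violation)
print(price_cost)
```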

4.1 Validation of the solutions

All subsequent distributed solutions are compared to the monolithic solutions of the same problem. In the case of free final times, instead of solving the MINLP, all possible solutions for the final interval \(N_{i,f}\) that end no later than 3 intervals from the unconstrained solution are enumerated and used as the benchmark for the evaluation of the different distributed solutions.

For the iterative methods with the augmented Lagrangian term, upon convergence, the solutions are validated with \(\rho = 0\) in order to ensure that the constraints are satisfied even without penalty parameters and thus the corresponding Lagrange multipliers satisfy the necessary conditions of optimality.

5 Numerical results

The performance of the different methods is evaluated using the following three criteria:

  • required number of iterations,

  • evolution of the primal infeasibility over the iterations,

  • objective value at convergence.

All scenarios were evaluated using the optimization parameters given in Table 2, which were determined empirically. The optimization problems are solved in Python using IPOPT as NLP solver and the CasADi toolbox for the computation of the derivatives (Andersson et al. 2018; Wächter and Biegler 2006).

Table 2 Parameters for the different distributed optimization methods

In the following, all reactors have the same properties and initial conditions, which is not required in general. Different scenarios are generated by changing the number of reactors, the time discretization \(\varDelta t\), and the starting times of the reactors. Three reactors starting at \(0\ \varDelta t\), \(1\ \varDelta t\), and \(2\ \varDelta t\) are indicated by the starting sequence [0, 1, 2].

5.1 Comparison of the methods for fixed final times

In the following, first the results for fixed final times are discussed. Since changing the fixed final time has only a minor influence on the arc structure of the solution as long as feeding is completed within the considered time horizon, it is not varied.

At first, different scenarios that are generated by varying the starting times of the reactors are considered. Since it is possible to generate scenarios without active overarching constraints, which do not require any coordination, only scenarios where the overarching input constraint is active in at least two intervals are considered.

In Fig. 2, the final distribution of the input between the different sub-systems and the corresponding trajectories of the states are shown for three reactors starting at the same time, i.e., [0, 0, 0]. The input is discretized into piece-wise constant intervals of \(\varDelta t = 4\) h and the final times of all reactors are fixed at \(20\ \varDelta t\). In Fig. 2b, the inputs \(u_{i}\) are stacked on top of each other: input \(u_{1}\) is the difference between \(\bar{u}_{1}\) and the baseline at 0, input \(u_{2}\) is the difference between \(\bar{u}_{2}\) and \(\bar{u}_{1}\), and \(u_{3}\) is the difference between \(\bar{u}_{3}\) and \(\bar{u}_{2}\). This plot shows how the resources are distributed between the different sub-systems over time and that the shared resource constraint can be satisfied. This is a special case: due to the identical starting times and the equal distribution of the feed rate (\(u_{i}\)), all trajectories in response to the prices are the same. The corresponding state profiles are displayed in Fig. 2a. It is worth mentioning that the structure of the optimal solutions of the sub-systems (reactors) changes in the distributed optimization. As the maximum feed rate is no longer reached, the first arc now is a sensitivity-seeking arc, and the constraint on the adiabatic temperature rise only becomes active at 44 h. Thus, for most of the batch time, the solution of the sub-system problems is not at the constraints; the constraint on the feed is dualized and only enforced by the coordination via the price of the feed.

Fig. 2: Optimal input trajectories for three reactors starting at [0, 0, 0] (right). Resulting trajectories of the states and the amount of product C for reactors 1, 2, and 3, which are equal due to the same starting time and equal distribution of the input (left)

Fig. 3: Evolution of the Lagrange multipliers \(\lambda ^{(k)}[j], j\in \{1,\ldots ,m\}\) for three reactors starting at [0, 0, 0] for the different methods to maximize the dual over the iterations k

Figure 3 shows the evolution of the Lagrange multipliers corresponding to the overarching constraints on the shared resources during the maximization of the dual for the three different methods. In Fig. 3a, the spikes at the beginning of the evolution of the Lagrange multipliers for the sub-gradient method result from the adaptation of the stepsizes to the specific problem. Once the stepsizes are adequate, the prices converge to the values of the monolithic optimization. For ADMM, the prices converge more quickly towards the optimal ones than for the sub-gradient method; however, the increasing number of active overarching constraints as well as the balancing of primal and dual feasibility result in oscillations towards the optimal Lagrange multipliers, as can be seen in Fig. 3b. In Fig. 3c, the prices are adapted according to the ALADIN method, which is based on the derivatives at the currently active set of local inequalities, i.e., the active input and path constraints of all reactors. After a few iterations, the approximated active set is close to the actual one, and the prices converge quickly towards the optimal Lagrange multipliers.

Fig. 4: Resulting distribution of the resources for three reactors starting at [0, 0, 2]

Another interesting scenario that is examined further results from the starting times [0, 0, 2]. The optimized input trajectories for the different methods are shown in Fig. 4. It can be seen that the shared resource constraint, indicated by the dashed line, is satisfied for all methods; however, not all methods yield the same trajectories. Nonetheless, the objective values agree up to the 5th significant digit. Thus, the differences in the trajectories can be explained as different local optima that all satisfy (within the specified accuracy) the necessary conditions of optimality.

Fig. 5: Evolution of the primal infeasibility over the iterations for the three methods for scenario [0, 0, 2]

Fig. 6: Evolution of the Lagrange multipliers over the iterations for the three methods for scenario [0, 0, 2]

In Fig. 5, the corresponding evolutions of the primal feasibilities (infinity norm) are shown. As can be seen in Fig. 5a, in this scenario the sub-gradient method first converges to a point at which a further adaptation of the Lagrange multipliers, shown in Fig. 6a, does not have an effect on the primal feasibility. This is due to a set of local constraints being active. Once this active set changes before iteration 2000, the primal feasibility decreases further and the method converges towards a feasible solution.

For ADMM, the primal feasibility does not steadily decrease but oscillates. Due to the second optimality criterion of dual feasibility, the scheme continues to iterate even when the overarching constraints are satisfied for all intervals. As can be seen in Fig. 6b, starting from iteration 20, the Lagrange multipliers oscillate around their final values before they converge.

Similar to ADMM, ALADIN decreases the infinity norm of the primal feasibility quickly, cf. Fig. 5c. Thereafter, the primal feasibility spikes either when the active set in the QP approximation changes or when new overarching constraints become active. The latter can be seen in Fig. 6c from the new non-zero Lagrange multipliers; the former is only indirectly visible through the significant changes in the Lagrange multipliers. The initial spike in the Lagrange multipliers \(\lambda\) results from \(\lambda _{QP}\) being calculated based on the wrong active set in the coordinator-level QP. Different from the [0, 0, 0] scenario, several iterations are necessary for the Lagrange multipliers, cf. Fig. 6c, to converge to the values that yield the inputs in Fig. 4d.

In Table 3, the results for different scenarios are given. The scenarios are permutations of the starting times between 0 and 8 h. The objective value is scaled by a factor of − 1000, such that higher values correspond to better objectives. This table is continued in Table 5 with the remaining permutations of the starting times between 0 and 16 h. If the objective value of a distributed method is slightly better than that of the monolithic solution method, this results from the overarching constraints being satisfied only to the specified tolerance \(\epsilon _{Feas}\). The monolithic solution is accurate to the precision of IPOPT, which is set to \(10^{-12}\). Feeding slightly more into the reactors leads to these small differences in the objective values. Even though it is mostly not reflected in the objective values, the resulting trajectories for the inputs do not always have the same arc structure. In addition to the objective values, also the number of necessary iterations, with the value of the best distributed method highlighted in bold, as well as the numbers of coordinated intervals (# Coord. Ints.), i.e., intervals with active overarching constraints, are shown.

Table 3 Results for \(\varDelta t = 4\) h and fixed final times

The number of coordinated intervals correlates with the objective value: if fewer intervals need to be coordinated, there are more intervals without an active constraint on the usage of the resources. It is not surprising that the scenarios where the starting times are further apart, as well as those where the reactors start later, cf. [0, 0, 2] and [0, 2, 2], have better objective values. The latter results from the fact that \({\partial n_{C,i}}/{\partial t}\) decreases once the path constraint on \(c_{B,i}\) is active.

The influence of the granularity of the time discretization on the number of necessary iterations depends on several factors. In general, increasing the discretization interval \(\varDelta t\) leads, on average, to fewer intervals that have to be coordinated, and the number of reactor-specific constraints that can become active decreases significantly. This can be seen by comparing the results for \(\varDelta t = 4\) h with the results in Tables 6 and 7 in the Appendix, where the time discretization is changed to \(\varDelta t = 8\) h and \(\varDelta t = 16\) h, respectively.

The average number of iterations of the sub-gradient method approximately halves when the time discretization interval is doubled. For all three discretizations, the sub-gradient method has the highest variance. For ADMM, the number of iterations is reduced similarly; however, the method converges much more consistently, i.e., the different scenarios do not influence the number of iterations as much as for the other methods. ALADIN exhibits a larger spread for the time discretizations \(\varDelta t = 4\) h and \(\varDelta t = 8\) h but requires significantly fewer iterations for \(\varDelta t = 16\) h. An example where ALADIN requires many iterations is scenario [0, 1, 4]; the evolution of the primal infeasibility and of the Lagrange multipliers is shown in Fig. 7. Between iterations 30 and 140, the stepsize is too large for the algorithm to find the correct QP approximation. Once the correct active set is found, the algorithm converges, however with small steps, such that another 60 iterations are required.

Fig. 7: Evolution of the primal infeasibility and the Lagrange multipliers over the iterations of ALADIN for scenario [0, 1, 4]; the method converges only once the \(\alpha _{i}\) are small enough

Varying the number of reactors does not make the problem significantly harder. For instance, changing the number of reactors while maintaining their starting position (e.g., [0], [0, 0, 0] and [0, 0, 0, 0, 0, 0]) and adapting the available amount \(u_{shared,max}\) accordingly, i.e., by multiplication with the new number of reactors divided by the old one, does not influence the necessary number of iterations, except for the initial phase, since the solution has the same structure. This can be seen in Fig. 8a–c, where the structure of the imbalances is similar.

Fig. 8: Evolution of the primal feasibilities using the sub-gradient method for different numbers of reactors and available amounts of the shared resource \(u_{shared,max}\)

The difference in the final numbers of iterations results from the difference in the initial imbalance and the different adaptation of the stepsizes \(\alpha\). The evolution of the prices and of the final structure of the solution are similar.

If however \(u_{shared,max}\) is kept constant while the number of reactors is changed, the structure of the solution and the trajectory of the multipliers change, cf. Fig. 8d, where the maximum amount of \(u_{shared,max} = 0.05\) l/h is distributed between only two reactors.

5.2 Comparison of the methods for variable final times

The three methods are also compared for problems with enforced terminal constraints. The final times are initialized as in the previous case but changed during the optimization.

Fig. 9: Evolution of the final times for scenario [0, 0, 2] with free final times

Fig. 10: Evolution of the Lagrange multipliers for scenario [0, 0, 2] with free final times

Fig. 11: Resulting input profiles for the reactors for scenario [0, 0, 2] with free final times

In addition to the properties from the previous subsection, the points in time at which the final times change can be evaluated. In Figs. 9 and 10, the evolution of the final times and of the Lagrange multipliers is shown for scenario [0, 0, 2]. It can be seen that the distributed methods converge to the same final times in the considered cases. In the case of the sub-gradient method, the high fluctuations of the Lagrange multipliers at the beginning also cause the final times to change significantly until the stepsizes are adjusted accordingly. The large numbers of required iterations are caused by the very small stepsizes resulting from the automatic stepsize adaptation. For ADMM, significantly fewer changes can be observed. ALADIN finds the vicinity of the optimal Lagrange multipliers even more quickly, and fewer changes in the final times occur. The number of iterations stays in a similar range as for the case with fixed final times, which can be seen in Table 4; the resulting distribution of the feed rate between the reactors is shown in Fig. 11.

Table 4 Results for \(\varDelta t = 4\) h and variable final times

In Table 4, the satisfaction of the terminal constraint at convergence is added as an additional column. The solutions deviate more from the monolithic optimization in the case with free final times. However, for the considered cases, and also for those in Tables 8, 9, and 10, the heuristic finds feasible solutions. As with fixed final times, the smaller \(\varDelta t\) is, the more often the distributed solution methods converge to the monolithic solution.

6 Discussion

In the following, the results of the distributed optimization methods as well as of the free-final-times heuristic are discussed and analyzed. By coordinating the consumption of the shared resources between the individual reactors using Lagrange multipliers, the structure of the optimal solutions of the individual reactors may change; in this case, additional sensitivity-seeking arcs arise instead of constraint-seeking arcs. The example thus shows that both constraint-seeking and sensitivity-seeking arcs in the sub-problems can be handled.

Since trajectory optimization problems are in most cases non-convex and thus belong to a problem class for which few general statements can be made, the suitability of the methods is evaluated qualitatively.

6.1 Comparison of the distributed optimization methods

While for ALADIN an extension exists that guarantees convergence to a local minimum by using de facto monolithic optimization steps to determine \(\alpha _{1}\), \(\alpha _{2}\), and \(\alpha _{3}\), no proofs exist that ADMM or the sub-gradient method necessarily converge in the non-convex case. Furthermore, it should be noted that all considered methods based on dual decomposition are infeasible-path methods: the resulting trajectories satisfy the overarching constraints only at convergence. Nonetheless, in all considered scenarios the distributed optimization methods found solutions that are feasible with respect to the overarching constraints and converged to at least a local minimum using the parameters given in Table 2.

The sub-gradient method for inequality-constrained problems adapts the stepsizes automatically, which has the advantage that no prior information about the different sub-systems is required. This comes at the cost that for each scenario and each optimization run, a significant number of iterations is spent determining stepsizes that are conservative enough for the active set of the inequality constraints on the shared resources to stay mostly the same. Once these stepsizes are found, the method converges slowly, and the final number of iterations can vary significantly, especially if local constraints are active, which can prevent improvements of the solution for many iterations, as seen for instance in Fig. 5a. While one might conclude from Tables 3 and 4 that the sub-gradient method requires significantly more iterations for free final times, the continuation in Tables 5 and 8 shows that this is not the case. High iteration counts are instead caused by the small stepsizes resulting from the scheme in Eq. 15.

The benefit of the sub-gradient method is its simplicity. The augmentation of the objective function can be interpreted economically as the cost of using the shared resource; on the coordination layer, the update mechanism matches supply and demand via the prices.
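This price interpretation can be made concrete with a toy sketch. The linear demand responses, the parameters \(a_i\), and the capacity below are invented for illustration and stand in for the local trajectory optimizations; the diminishing-stepsize rule is a generic choice, not the exact scheme of Eq. 15:

```python
# Hypothetical sub-system responses: at price lam for the shared resource,
# sub-system i requests u_i = max(0, a_i - lam); the a_i are made up.
A = [4.0, 2.0]   # illustrative demand parameters
U_MAX = 3.0      # shared-resource capacity

def local_response(a, lam):
    """Resource request of one sub-system at price lam
    (placeholder for a local trajectory optimization)."""
    return max(0.0, a - lam)

def price_coordination(n_iter=500, alpha0=1.0):
    """Projected sub-gradient update of the Lagrange multiplier (price):
    raise the price while aggregate demand exceeds the capacity,
    lower it otherwise."""
    lam = 0.0
    for k in range(n_iter):
        total = sum(local_response(a, lam) for a in A)
        step = alpha0 / (k + 1)                   # diminishing stepsize
        lam = max(0.0, lam + step * (total - U_MAX))
    return lam

lam_star = price_coordination()
# At the equilibrium price the requests sum to U_MAX:
# here lam* = 1.5 with requests (2.5, 0.5).
```

The coordinator only ever sees the aggregate resource consumption, which mirrors the limited information exchange of the sub-gradient method.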

While ADMM does not need more information from the different sub-systems than the sub-gradient method, it introduces artificial penalization terms that regularize the deviations from feasible solutions, which significantly improved the speed of convergence in the considered cases. The price is that the sub-problems no longer minimize their local cost functions alone. Whether this is acceptable depends on the situation: if distribution is mostly used as a tool to spread the computational load or to robustify the optimization, it will probably not matter. If the goal is to coordinate the sub-systems while they only optimize their local cost functions, it may not be acceptable.
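The effect of the artificial penalization terms can be illustrated with a minimal two-block sharing example. The quadratic local costs \(f_i(u_i) = (u_i - c_i)^2\), the coupling \(u_1 + u_2 = U\), and all numbers are hypothetical stand-ins for the reactor sub-problems; with quadratic costs, the augmented local minimizations have closed forms:

```python
# Two sub-systems share a resource: u1 + u2 = U.
# Local costs f_i(u) = (u - c_i)^2 are invented for illustration.
C1, C2, U = 4.0, 2.0, 3.0

def admm(rho=1.0, n_iter=200):
    """Two-block ADMM: each sub-problem minimizes its local cost plus the
    quadratic penalty (rho/2) * (u1 + u2 - U + y/rho)^2, i.e. the
    artificial regularization of deviations from feasibility; y is the
    scaled multiplier (price) of the coupling constraint."""
    u1 = u2 = y = 0.0
    for _ in range(n_iter):
        # closed-form minimizers of the augmented local objectives
        u1 = (2 * C1 + rho * (U - u2) - y) / (2 + rho)
        u2 = (2 * C2 + rho * (U - u1) - y) / (2 + rho)
        y += rho * (u1 + u2 - U)          # multiplier update
    return u1, u2, y

u1, u2, y = admm()
# converges to the coupled optimum u1 = 2.5, u2 = 0.5 (y = 3)
```

Note that each local update depends on the other sub-system's last iterate only through the penalty term, which is exactly why the sub-problems no longer optimize their local cost in isolation.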

ALADIN uses much more information from the different sub-systems, including state variables as well as derivatives of the objectives and of the active constraints, to create QP approximations. As a consequence, ALADIN converges in most cases significantly faster than the other methods, which is also reported in the literature by Engelmann and Faulwasser (2019) and Jiang et al. (2017). However, this works only when the QP approximations are accurate. In distributed trajectory optimization, two factors can make the approximation difficult: highly non-linear constraints and changing active sets. The former results from the non-linear model equations; the latter from the fact that small changes in the Lagrange multipliers can completely change the active set or the arc structure of the solutions. As long as the active set is not correct, \(\lambda _{QP}\) will not be optimal and ALADIN will not converge. Thus, an adaptive scheme for the stepsizes was presented in Algorithm 5 that, by decreasing the stepsize with the number of iterations, prevents oscillation between different active sets.
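The idea of damping the multiplier updates can be sketched as follows. The function name, the blending form, and the decay rule are generic placeholders; the exact rule of Algorithm 5 is not reproduced here:

```python
def damped_multiplier_update(lam, lam_qp, k, alpha0=1.0, decay=0.1):
    """Blend the current multipliers lam with the QP proposal lam_qp
    using a stepsize alpha_k that decreases with the iteration count k.
    Early on, the QP proposal is followed almost fully; later, the small
    steps damp oscillation between competing active sets."""
    alpha = alpha0 / (1.0 + decay * k)
    return (1 - alpha) * lam + alpha * lam_qp
```

For instance, if the coordinator QP keeps proposing multipliers that alternate between two active sets, the shrinking \(\alpha_k\) forces the iterates to settle between them instead of jumping back and forth indefinitely.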

In the original ALADIN paper (Houska et al. 2016), the authors recommend choosing a sufficiently large penalty parameter \(\rho\) so that the method converges at a super-linear rate. We found that for the considered problems, choosing \(\rho\) too large resulted in indefinite Hessian matrices \(\mathscr {H}_{i}\). The difficulty of choosing \(\rho\) and \(\kappa\) results from the following trade-off: if \(\rho\) is chosen large, the Lagrange multipliers \(\mu _{i}\) of the local constraints of the different sub-systems can become large when the \(z_{i}\) variables are not feasible, which in turn can lead to indefinite Hessian matrices \(\mathscr {H}_{i}\). Since \(\mathscr {H}_{i}\) must be positive definite for the coordinator-level QP to yield meaningful updates of \(\lambda\) and \(z_{i}\), the parameter \(\kappa\), which increases the eigenvalues, must be chosen sufficiently large. If, however, \(\kappa\) is large, the coordinator-level QP yields very small \(\varDelta z_{i}\), and hence small steps towards the optimum. Thus, an adaptive scheme was used to prevent indefinite Hessian matrices while eventually allowing larger changes in the reference variables \(z_{i}\).
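One common way to realize such an eigenvalue-based regularization is sketched below. This is a generic eigenvalue-shift construction, not the paper's exact rule for how \(\kappa\) enters:

```python
import numpy as np

def regularize_hessian(H, kappa):
    """Raise every eigenvalue of a (possibly indefinite) local Hessian
    to at least kappa, so that the coordinator QP sees a positive
    definite matrix.  A larger kappa gives a safer QP but flattens the
    curvature, which shrinks the resulting steps Delta z_i."""
    w, V = np.linalg.eigh((H + H.T) / 2.0)   # symmetrize, then decompose
    w = np.maximum(w, kappa)                  # shift small/negative eigenvalues
    return V @ np.diag(w) @ V.T

# An indefinite toy Hessian: eigenvalues (1, -2) become (1, 0.5).
H_reg = regularize_hessian(np.diag([1.0, -2.0]), kappa=0.5)
```

This makes the trade-off in the text explicit: the shift guarantees positive definiteness, but along the regularized directions the quadratic model overestimates the curvature, so the QP step in those directions becomes conservative.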

These adaptations to ALADIN were made to ensure convergence for all considered scenarios. This is, of course, a trade-off, since without these adaptations many scenarios converge much faster.

In some real settings, sharing the gradients of the local objectives may not be acceptable, as this may allow the coordinator to infer the local cost structure.

6.2 Evaluation of the heuristic for the satisfaction of the terminal constraints

In all considered scenarios with free final times, the terminal constraints were satisfied for the distributed solutions. This can in general not be guaranteed, and including this check along with primal and dual feasibility as a convergence criterion can prevent convergence. For the application of distributed optimization with free final times, we thus recommend checking the feasibility of all sub-problems at convergence of the distributed optimization method. If feasibility is not satisfied upon convergence, a fallback to a search space with worse objectives can be implemented. For the considered case study, this could, for example, mean increasing the final times for all sub-systems and re-optimizing without adapting the final times, which eventually guarantees the satisfaction of the terminal constraint. Other fallbacks could be to allocate \(1/N\) of the shared resources to each reactor or to partly disaggregate the profiles.
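The recommended safeguard can be sketched as a thin wrapper around the distributed solver. All function names below are hypothetical placeholders for the actual solver, the terminal-constraint check, and the rule for extending the final times:

```python
def coordinate_with_fallback(optimize_distributed, check_terminal,
                             extend_final_times, t_f, max_tries=5):
    """After the distributed method converges, verify the terminal
    constraints of all sub-problems; if any is violated, fall back to
    longer final times (a worse but feasible search space) and
    re-optimize.  All callables are illustrative placeholders."""
    for _ in range(max_tries):
        solutions = optimize_distributed(t_f)
        if all(check_terminal(s) for s in solutions):
            return solutions, t_f         # feasible at convergence
        t_f = extend_final_times(t_f)     # enlarge the batch duration
    raise RuntimeError("no feasible final times found within max_tries")
```

The important point is that the feasibility check runs only once per converged distributed run, so it does not interfere with the convergence criterion of the inner iterations.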

With respect to the number of iterations required to converge, the proposed strategy to adapt the final times can be integrated into the iterative methods without a significant influence on the overall iteration count.

7 Conclusions

In this contribution, different methods for distributed trajectory optimization, in which the objective values of the sub-systems are not shared, were investigated. As an example, the trajectories of semi-batch reactors that are connected via overarching constraints on the feed rates were optimized. We evaluated and compared the different methods based on the optimization of the trajectory of a benchmark semi-batch reactor and showed that, for the considered case, convergence to local minima was achieved. Furthermore, a heuristic was proposed to include the final times of the different trajectories as degrees of freedom. Since the considered problem is not convex, a qualitative analysis of the results was performed, and possible obstacles for the application of the distributed optimization methods to other trajectory optimization problems were pointed out. In the distributed optimization, the arc structure of the optimal solution may differ from that of the solutions of the sub-system problems, as constraints of the sub-problems are now dualized.

The three investigated methods, the sub-gradient method, ADMM, and ALADIN, provide different trade-offs between the sharing of information and the rate of convergence. As expected, it can in general be said that the more information is exchanged between the coordination level and the sub-system level, the faster the methods converge. Since ADMM showed a consistent rate of convergence and requires no information from the sub-systems beyond their resource consumption, it is recommended as the first choice for distributed trajectory optimization problems if confidentiality is of importance.

The results can also be applied in various other domains where resources have to be allocated or shared between different dynamic systems, e.g., in the coordination of plug-in electric vehicles, the coordination of autonomous robots, distributed control, etc.

Future work will focus on improving the convergence of ALADIN for problems with overarching inequality constraints by better exploiting the available information on the active sets of the sub-problems. Techniques other than SQP, e.g. interior-point or active-set methods, might be adapted for use with ALADIN to enable faster convergence to the correct active set. Furthermore, the characteristics of the trajectory optimization problems that can be solved with the proposed methods, or more specifically, what arc structures and what types of terminal constraints can be coordinated, should be investigated.