# Simulation of multibody systems with servo constraints through optimal control


## Abstract

We consider mechanical systems where the dynamics are partially constrained to prescribed trajectories. An example for such a system is a building crane with a load and the requirement that the load moves on a certain path.

Enforcing this condition directly in form of a servo constraint leads to differential-algebraic equations (DAEs) of arbitrarily high index. Typically, the model equations are of index 5, which already poses high regularity conditions. If we relax the servo constraints and consider the system from an optimal control point of view, the strong regularity conditions vanish, and the solution can be obtained by standard techniques.

By means of the well-known \(n\)-car example and an overhead crane, the theoretical and expected numerical difficulties of the direct DAE and the alternative modeling approach are illustrated. We show how the formulation of the problem in an optimal control context works and address the solvability of the optimal control system. We discuss that the problematic DAE behavior is still inherent in the optimal control system and show how its manifestations depend on the regularization parameters of the optimization.

### Keywords

Servo constraints, Inverse dynamics, High-index DAEs, Optimal control, Underactuated mechanical systems

## 1 Introduction

We consider mechanical systems with servo constraints; see, e.g., [7, 8, 19], for which a part of the motion is specified. This includes crane models where one seeks for an input that makes an end effector follow a prescribed trajectory. Often, these configurations are called *inverse dynamics* problems or, since the number of degrees of freedom exceeds the number of controls, *underactuated mechanical systems*. A recent overview of the diversity of given applications can be found, e.g., in [9, 27]. In this paper, we consider a two-car example [8, Sect. II], its generalization to \(n\) connected cars, and an overhead crane example [5, Ex. 4]. In all of these examples, the model equations are *differentially flat*, which means that the input can be expressed and determined solely by means of the desired output and its derivatives [12].

In the direct modeling approach, the inputs are regarded as variables, whereas the desired output is formulated as a constraint. This constraint makes the model equations a system of differential-algebraic equations (DAE) even though the dynamics of the system may be given in the form of an ODE. Even more, the resulting DAE is typically of high index so that for its application in simulations, we need to employ a suitable *index reduction* [1, 5, 7]. Apart from the numerical difficulties that come with high index DAEs, an immediate drawback of the DAE approach is that a solution to the problem can only exist if the target trajectory is sufficiently smooth.

As a cure to both mentioned shortcomings of the DAE approach, we consider an optimal control approach that relaxes the constraints and balances the approximation to the target with the control effort. The relaxation of the constraint \(Cx = y\) softens the strong regularity assumptions in the DAE setting. In theory, the desired trajectory \(y\) may then be even discontinuous.

The same approach was discussed in [26], however, without an analysis of the optimality system; there, due to otherwise inconsistent boundary conditions, it becomes necessary to introduce an additional regularization.

Apart from inconsistent boundary conditions, nonsmooth data may also cause problems, not in theory but in the practical application of the optimization approach. In other words, the problematic DAE behavior is not simply gone, since the weak coupling of the masses and the noncollocated sensors and actuators may lead to oscillations in the output and strong peaks in the (unknown) input. A remedy is the penalization of the derivatives of the control force at the expense of a worse performance and of less standard systems of equations for the numerical realization. In particular, we address the following two aspects:

- 1.
The very weak coupling of input and output that, in the (DAE) limit, may lead to singular actuations. We will investigate the dependency on the penalty or regularization parameters and the behavior in the limit case.

- 2.
Necessary and sufficient optimality conditions with an emphasis on their use for the solution of the optimization problem. Particularly, in the case of holonomic constraints, the formally derived first-order optimality conditions are preferable over alternative formulations, but they may not be solvable due to inconsistent data.

The paper is organized as follows. In Sect. 2, we give the formulation of the servo-constraint problem as a DAE for which additional holonomic constraints are allowed. The counterpart is then presented in Sect. 3 in which the same configuration is modeled as an optimal control problem. Here we formulate the optimality conditions, discuss the consistency conditions of the boundary data, and prove the existence of an optimal solution. The relation of the two approaches is then the topic of Sect. 4; in particular, we analyze the optimal control problem in which the input is not penalized. Section 5 gives an overview of the different solution strategies. This includes the DAE case and the optimal control approach for which boundary-value problems have to be solved. In Sect. 6, we compare the two approaches by means of two numerical examples. Finally, a conclusion is given in Sect. 7.

## 2 DAE setting

This section is devoted to the original formulation of the servo-constraint configuration as a high-index DAE. For this, we consider the dynamics of a mechanical system with servo and possibly holonomic constraints. We start with a prototype of a mechanical system with servo constraints.

### Example 1

Example 1 is a realization of a general second-order system with inputs and servo constraints. In the DAE formulation, where the input is considered as an unknown variable, it reads as follows.

### Configuration 1

Another example of Configuration 1 is given by the generalization of Example 1 to a spring-mass chain of \(n\) masses connected by \(n-1\) springs.

### Example 2

In order to model general multibody systems, such as the crane model in [5, Ex. 4], we need to include holonomic constraints of type \(g(x)=0\), where \(g\) is a suitable possibly nonlinear function. Therefore, we extend Configuration 1 as follows.

### Configuration 2

The following observations and assumptions will be formulated for Configuration 2, but they directly apply to Configuration 1 if we omit \(g\) and \(G^{{\mathsf{T}}}(x)p\). From DAE theory it is known that the derivatives of the right-hand side of system (4a)–(4d) appear in the solution [4, Ch. 2]. Because of the assumed semiexplicit structure of the system, only the derivatives of \(g\) and \(y\) are part of the solution and, in particular, of the desired input \(u\). We can show that in the index-5 case of Example 1, the input depends on the 4th derivative of \(y\), whereas for the \(n\)-spring-mass chain of Example 2, it depends on \(y^{(2n)}\). Thus, the following assumptions are made to guarantee a continuous solution \(u\) of Configuration 2 or 1.
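The growth of the required derivative order can be verified symbolically for the shortest chain. The following sketch assumes a generic two-mass chain (masses \(m_1, m_2\), stiffness \(k\), input on the second mass, output \(y = x_1\)); these equations are illustrative stand-ins, not the paper's Eqs. (4a)–(4d):

```python
# SymPy check: for a 2-mass chain, the input u depends on y'''' (index 5).
# The chain model below is an illustrative stand-in, not the paper's system.
import sympy as sp

t = sp.symbols('t')
m1, m2, k = sp.symbols('m1 m2 k', positive=True)
y = sp.Function('y')(t)          # prescribed output trajectory, x1(t) = y(t)

# m1*x1'' = k*(x2 - x1) with x1 = y determines x2 from y and y'':
x2 = m1 / k * y.diff(t, 2) + y

# m2*x2'' = -k*(x2 - x1) + u then yields the required input:
u = m2 * x2.diff(t, 2) + k * (x2 - y)

print(sp.expand(u))              # contains the term m1*m2/k * y''''
```

Repeating the same elimination along an \(n\)-mass chain produces \(y^{(2n)}\) in \(u\), consistent with the index-\((2n+1)\) statement above.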

### Assumption 1

(*DAE setting*) *In the formulation of Configurations* 2 *and* 1, *it is assumed that*:

- 1. *Smoothness of the data*: \(f\in\mathcal{C}([0,T]; \mathbb{R}^{n})\) *and* \(y\in\mathcal{C}^{\nu_{d}-1}([0,T]; \mathbb{R}^{m})\), *where* \(\nu_{d}\) *is the* (*differentiation*) *index of the system equations*.
- 2. *Consistency of the initial values with respect to the holonomic constraints*: \(g(x^{0})=0\) *and* \(G(x^{0})v^{0}= 0\), *if applicable*.
- 3. *Consistency of the initial values with respect to the target output*: \(Cx^{0}= y(0)\) *and* \(Cv^{0}= \dot{y}(0)\), *and also the conditions that result from the insertion of the differential equation into the servo constraint*,
  $$\begin{aligned} \ddot{y} = C \ddot{x} = CM^{-1} \bigl( Ax + G^{{\mathsf{T}}}(x)p + B u + f\bigr). \end{aligned}$$

### Remark 1

We emphasize that the numerical integration of high-index DAEs involves many difficulties; see, e.g., [21, Ch. II]. Furthermore, the high-index property and the resulting assumptions suggest that the DAE formulation (4a)–(4d) does not provide an appropriate model. Thus, we propose a remodeling process that leads to an optimal control problem. This involves a modeling error that is adjustable by a parameter, as discussed in the following sections.

### Remark 2

From the observation that the solution of an index-\(\nu_{d}\) system depends on the \(\nu_{d}\)th derivative of the right-hand side it follows that the index is closely related to the so-called *relative degree*; see [18] for a definition. Indeed, for the examples considered in this paper, the rule of thumb that *the relative degree is equal to the index minus 1* applies. However, since the same input-to-output behavior can be realized through DAEs of different indices, this rule does not hold in general.

## 3 Formulation as optimal control problem

Instead of prescribing the servo constraint (4c) in a rigid way, we can formulate it as the target of an optimization problem. Accordingly, the solution does not follow the trajectory exactly, which allows for a less regular target, and which typically leads to smaller input forces.

### Configuration 3

The parameters \(Q\), \(S\), and \(R_{0}, \ldots, R_{\nu}\) can be chosen to meet certain requirements for the minimization. With this, we may impose different kinds of penalizations on the derivatives of the input variable \(u\). Note that \(R_{0}, \ldots, R_{\nu}\) describe the modeling error compared to the DAE formulation in Configuration 2; see also the discussion in Sect. 4.

Note that in Configuration 3 the fulfillment of the constraint (4c) is balanced with the cost of the input \(u\), including its derivatives up to order \(\nu\). Also, the necessary smoothness conditions on \(y\) for a continuous solution \(u\) and the consistency condition of the initial values with respect to the target output (see Assumption 1) can be relaxed.
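As a point of reference, a tracking functional of the kind described can be sketched as follows; this is a reconstruction from the surrounding text, and the exact form of (5), in particular the terminal term weighted through \(S\), may differ:

```latex
\mathcal{J}(x,u) \;=\; \frac{1}{2}\int_0^T \Bigl[\,(Cx - y)^{\mathsf{T}} Q\,(Cx - y)
   \;+\; \sum_{i=0}^{\nu} \bigl(u^{(i)}\bigr)^{\mathsf{T}} R_i\, u^{(i)}\Bigr]\,\mathrm{d}t
   \;+\; \mathcal{S}\bigl(x(T), \dot{x}(T)\bigr)
   \;\longrightarrow\; \min_{u}
```

Here \(Q\) balances the violation of the servo constraint \(Cx = y\), while \(R_{0}, \ldots, R_{\nu}\) penalize the input and its derivatives up to order \(\nu\).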

### Assumption 2

(*Optimal control setting*) *In the formulation of Configuration* 3, *it is assumed that*:

- 1. *Smoothness of the data*: \(f\) *and* \(y\) *are continuous on* \([0,T]\).
- 2. *Consistency of the initial values with respect to the holonomic constraints*: \(g(x^{0})=0\) *and* \(G(x^{0})v^{0}= 0\), *if applicable*.

### 3.1 Optimality conditions

We derive *formal* optimality conditions for the optimization problem in Configuration 3; cf. [25, Ch. 6] and [22] for the DAE case with holonomic constraints. In order to apply the standard variational approach, we have to make the following assumption.

### Assumption 3

*For any input* \(u\), *the state equations* (4a)*–*(4b) *with* (4d) *have a unique solution* \(x = x(u)\), *and the map* \(u \mapsto x(u)\) *that assigns an input * \(u\) *to the corresponding solution is differentiable*.

### Problem 1

### Remark 3

In general, for \(\nu>0\), the space of suitable variations \(\delta _{u}\) depends on the incorporation of the inputs in the optimality problem. Either, we may add the derivatives of the input to the cost functional as in (5) without any specifications of initial conditions or incorporate the inputs \(\dot{u}, \dots, u^{(\nu-1)}\) as part of the state vector as in [25, Rem. 3.8]. In the latter case, initial conditions have to be stated for \(u(0), \dots, u^{(\nu-1)}(0)\), which may be unphysical. Furthermore, the corresponding variations \(\delta_{u}\), \(\dot{\delta}_{u}\), \(\ddot{\delta}_{u}, \ldots, \delta_{u}^{( \nu-1)}\) have to vanish at \(t=0\).

### Remark 4

### 3.2 First-order formulation

### 3.3 Necessary conditions for the existence of an optimal solution

In this subsection, the particular aspect of *consistency* as it arises in the context of optimal control of DAEs is discussed. Also, a mathematical result is presented that can be used to overcome this *consistency* issue in the case of optimal control of multibody systems with possibly nonlinear holonomic constraints.

If the optimality system (9a)–(11) has a solution, then it provides necessary optimality conditions for \((x(u),u)\). However, in the considered DAE context, i.e., when holonomic constraints are applied, it may happen that the optimization problem has a solution whereas the *formal* optimality system is not solvable [23]. Apart from the general case that the boundary values do not permit a solution [2], for a DAE, a solution may not exist because of insufficient smoothness of the data or because of inconsistent initial or terminal values.

Similar conditions in a slightly different formulation have been reported in [26], where the authors proposed the variants to remove the end point penalization from the cost functional or to consider a regularization of the dynamical equation. Within this regularization, the constraint (4b) is replaced by its derivative.

The following theorem shows that, instead of the state equations, we can modify the cost functional. This modification ensures consistency while affecting neither the performance criterion nor the necessity of the formal optimality conditions.

### Theorem 1

(*Ensuring consistency*) *Let* \(P_{x^{*}(T)}\) *be a projector onto the kernel of* \(G(x^{*}(T))\) *that satisfies* \(M^{-{\mathsf{T}}}P_{x^{*}(T)}^{{\mathsf{T}}}=P_{x^{*}(T)}M^{-{\mathsf{T}}}\). *Then, replacing the terminal conditions* (10) *for* \(\lambda\) *by the conditions* (17) *ensures consistency of the terminal conditions for* \(\lambda\). *Moreover, if* \((x^{*},p^{*},u^{*},\lambda,\mu)\) *solve the optimality system with* (17), *then* \(u^{*}\) *is a stationary point of* (7).

### Proof

### Remark 5

In the general case, \(P_{x^{*}(T)}\) is defined implicitly since it depends on the unknown solution \(x^{*}\). In the case of linear holonomic constraints, \(P_{x^{*}(T)}\) is readily computed; see [15, Rem. 8.20]. As in the example presented below, in order to ensure consistency of the terminal conditions, we may also use a projection onto a subspace of \(G(x(T))\) that is possibly independent of \(x\). This, however, will effectively alter the performance criterion \(\mathcal{S}\).

### Remark 6

If \(M\) is symmetric, then the condition \(M^{-{\mathsf{T}}}P_{x^{*}(T)} ^{{\mathsf{T}}}=P_{x^{*}(T)}M^{-{\mathsf{T}}}\) is the orthogonality condition in the inner product induced by \(M^{-1}\), which is the natural inner product in PDE applications.

### 3.4 Existence of optimal solutions

For Configuration 3 constrained by linear equations without holonomic constraints as in (2a), the existence of solutions is provided by well-known results.

### Lemma 1

(*Existence of an optimal solution*) *For* \(\nu\geq0\), *consider the optimal control problem with cost functional* (5) *constrained by* (2a) *and let Assumption * 2 *hold*. *If* \(R_{\nu}> 0\) *and if* \(u(0)\), \(\dot{u}(0), \ldots, u^{(\nu-1)}(0)\) *are given*, *then system* (9a)*–*(9e) *and the optimal control problem have a unique solution for any* \(T<\infty\) *and initial data* \(x^{0}\) *and* \(v^{0}\).

### Proof

Recall that, by the standard order reduction approach, the second-order system (9a)–(9e) can be reformulated as an equivalent first-order system; see Sect. 3.2.

Then, for \(\nu=0\), the result is given in [25, Rem. 3.6]. For \(\nu=1\) with \(R_{1} >0\), we may introduce a new variable for the derivative of the control \(u\). Interpreting \(u\) as a part of the state variable whereas its derivative \(v:= \dot{u}\) is the new control variable, the same arguments apply; cf. [25, Rem. 3.8]. Note that this ansatz requires an initial value for \(u\). This procedure may be successively repeated for \(\nu> 1\). □

### Remark 7

Note that the existence result in Lemma 1 is true for all initial values \(x^{0}\) and \(v^{0}\) in contrast to the DAE (4a)–(4d), which requires consistent initial data. The case \(\nu=0\) with \(R_{0}= 0\) yields again the DAE formulation of the problem and thus, needs consistent boundary conditions. This case is discussed in Sect. 4.

For the nonlinear optimality system (9a)–(9e) with holonomic constraints, we use the strong but reasonable assumption that the state equations (4a)–(4d) have a solution for any input \(u\) and that these solutions depend smoothly on the input (Assumption 3). Under this assumption, the existence of a solution to the formal optimality system is indeed a necessary condition for optimality.

We first show that the adjoint equations have a solution for every state trajectory and, thus, also at the optimal solution. Then we conclude that the smoothness of the input-to-state map implies that, at an optimal solution, the gradient condition (9e) must also be fulfilled.

### Lemma 2

*Consider a solution* \((x, p)\) *of* (4a)*–*(4d). *If* \(g\) *is sufficiently smooth and* \(G(x(t))\) *has full row rank for all* \(t\in[0,T]\), *and if the end condition* (19) *is consistent, then the adjoint equations* (9c)*–*(9d) *with end condition* (19) *have a unique solution*.

### Proof

### Theorem 2

*Assume that* \(u\mapsto x\) *is Lipschitz continuous*. *If for* \((x(u_{0}), u _{0})\), *the constraints and the cost functional are Gâteaux differentiable with respect to* \(x\) *at* \(x(u_{0})\) *and if the terminal conditions* (10) *are consistent*, *then the optimality system* (9a)*–*(9e) *is a necessary condition for optimality of* \((x(u_{0}), u_{0})\).

### Proof

At every candidate solution \(x(u_{0})\), the adjoint equations (9c) and (9d) with (10) are solvable by Lemma 2. Then, the claim follows from the result given in [15, Thm. 5.5]. □

Concerning sufficiency for the existence of unique global or local solutions, general results for constrained optimization extended to optimal control problems can be consulted; see, e.g., [15, Ch. 5.3].

### 3.5 Various optimality systems

#### 3.5.1 Case \(r=0\), \(\nu=0\)

#### 3.5.2 Case \(r=0\), \(\nu=1\)

#### 3.5.3 Case \(r=0\), \(\nu=2\)

#### 3.5.4 Case \(r>0\), \(\nu=0\)

## 4 Comparison of DAE and optimal control solutions

To discuss the qualitative behavior of the solutions of the optimal control problem, we consider the linear case without holonomic constraints (Configuration 1) and, in particular, discuss the \(n\)-element mass-spring chains as in Example 2.

In the optimal control setting of Sect. 3, it is reasonable to assume that \(R_{\nu}\) is positive definite. In the sequel, we analyze the limit case with \(\nu=0\) and \(R_{0}=0\), i.e., the case in which the control is not constrained at all.

### 4.1 Equivalence for \(R_{0}=0\)

We show that for Example 2 with \(\nu=0\) and \(R_{0}=0\), the DAE approach of Configuration 1 is equivalent to the optimal control formulation in Configuration 3, provided that \(Q>0\). Note that this implies that the corresponding optimality system is only solvable for \(y\in\mathcal{C} ^{2n}([0,T]; \mathbb{R})\). Recall that \(n\) denotes the number of coupled cars.

### Remark 8

The preceding observation is an instance of the general fact that if the linear system without holonomic constraints is controllable and observable and if \(Q\) is invertible, then, provided that the data is sufficiently smooth, a solution to the optimal control problem (Configuration 3) resembles the solution of the DAE of Configuration 1. To see this, recall that in the considered situation, the system is observable if and only if \(Cx-y=0\) implies \(x_{1}=y\), and, by duality, that the system is controllable if and only if \(B^{{\mathsf{T}}}\lambda= 0\) implies that \(\lambda= 0\) for all time.

### Remark 9

The equivalence of the DAE and optimal control approach for \(R_{0}=0\) can also be shown for the overhead crane from the example in Sect. 6.3, which includes a holonomic constraint. In this case, we can show in a similar manner that the dual variables \(\lambda\) and \(\mu\) vanish, so that the servo constraint \(Cx=y\) has to be satisfied.

### 4.2 Convergence barriers

By Lemma 1, if \(y\in\mathcal{C}([0,T], \mathbb{R})\) and \(R_{0}>0\), then Configuration 3 with \(\nu=0\), subject to linear constraints, has a unique solution. By the results of the previous Sect. 4.1, for \(R_{0}=0\), a solution only exists if \(y\) is sufficiently smooth.

- 1.
For nonsmooth \(y\), where we cannot expect the convergence of \(x_{1}\) to \(y\), the control \(u\) has strong peaks in its derivatives in order to fulfill (27) or (28) as \(R_{0} \to0\).

- 2.
For moderate values of \(R_{0}\), the tracking error \(x_{1}-y\) is affected by the oscillations in the derivatives of \(u\) multiplied by multiples of \(\frac{1}{k}\) depending on the length of the considered chain.

- 3.
As \(k\to\infty\), i.e., when the connections between the cars become more rigid, the higher derivatives of \(u\) are damped out from the tracking error. In fact, if one connection is rigid, then the two connected cars can be considered as one, and the index of the system reduces.

## 5 Solution strategies

Within this section, we review several concepts for the numerical solution of mechanical systems with servo constraints. First, we comment on the classical approach where the model is given by the DAE (4a)–(4d). In this case, index reduction methods are applied, which then allow us to integrate the resulting equations. Second, using the optimal control ansatz (5), we consider the two cases of either solving directly the optimality system, which is a boundary value problem (BVP), or the resulting Riccati equations. The latter approach may then be used to define a feedback control.

### 5.1 Solving high-index DAEs

As mentioned already in the introduction, the computation of the inverse dynamics of a discrete mechanical system given by a specification of a trajectory is a highly challenging problem [7, 8]. The reason is the high-index structure of the resulting DAEs. In the case of underactuated mechanical systems considered here, the systems are often of (differentiation) index 5 but may be arbitrarily high as shown in Example 2.

In order to realize the so-called *feedforward control* of the system, we have to solve this high-index DAE. Here, it is advisable to apply index reduction methods instead of solving the equations directly [4, Ch. 5.4]. A well-known approach based on a projection of the dynamics was introduced in [8]; see also [6]. For this, we have to compute time-dependent projection matrices in order to split the dynamics of the underactuated system into constrained and unconstrained parts.

Instead, we may also use the index reduction technique called *minimal extension* [20]. This technique profits from the given semiexplicit structure of the dynamical system and can be easily applied. The application to a wide range of crane models can be found in the recent papers [1, 3]. Therein, it is shown that the method of minimal extension may even be applied a second time, which then leads to a DAE of index 1 for which the numerical integration works essentially as for stiff ODEs [17, Ch. VI.1].

We remark that index reduction techniques are inevitable for numerical simulations of high-index problems. However, for applications like the \(n\)-car example given in the introduction, which is of index \(2n+1\), the DAE approach does not seem to be applicable. The modeling presented here as an optimal control problem still works properly for the general case. For a numerical example including a 3-car model, we refer to Sect. 6.

### 5.2 Direct solution of the optimality BVP

In this subsection, we discuss the application of the finite difference method and shooting approach in order to solve the optimality system (9a)–(9e).

#### 5.2.1 Finite differences

The optimality system includes both initial and terminal conditions, so that standard time-stepping methods cannot be applied directly. A straightforward approach is to introduce a grid on the time domain and to apply the method of finite differences to the coupled differential equations, which leads to a (large but block-sparse) algebraic system. Alternatively, we can apply finite elements or more general collocation methods.
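As a minimal sketch of this all-at-once discretization, consider a generic linear two-point BVP \(\dot{w} = Aw + f\) with boundary conditions \(B_0 w(0) + B_T w(T) = b\). The implicit-midpoint stencil below assembles one linear system for all grid values; the matrices are toy placeholders, not the paper's optimality system (9a)–(9e), and the matrix is kept dense only for brevity:

```python
# Finite-difference (implicit midpoint) sketch for a linear two-point BVP
#   w'(t) = A w(t) + f(t),  B0 w(0) + BT w(T) = b.
# All matrices here are illustrative placeholders.
import numpy as np

def solve_bvp_fd(A, f, B0, BT, b, T, N):
    """Assemble and solve one algebraic system for all grid values."""
    s = A.shape[0]
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    K = np.zeros((s * (N + 1), s * (N + 1)))   # block-sparse in practice
    rhs = np.zeros(s * (N + 1))
    I = np.eye(s)
    for i in range(N):
        tm = 0.5 * (t[i] + t[i + 1])
        # (w_{i+1} - w_i)/h = A (w_i + w_{i+1})/2 + f(t_mid)
        K[i*s:(i+1)*s, i*s:(i+1)*s] = -I / h - 0.5 * A
        K[i*s:(i+1)*s, (i+1)*s:(i+2)*s] = I / h - 0.5 * A
        rhs[i*s:(i+1)*s] = f(tm)
    # the boundary rows couple the first and the last grid value
    K[N*s:, :s] = B0
    K[N*s:, N*s:] = BT
    rhs[N*s:] = b
    return t, np.linalg.solve(K, rhs).reshape(N + 1, s)

# toy check: scalar w' = -w, w(0) = 1  ->  w(T) approx exp(-T)
A = np.array([[-1.0]])
t, W = solve_bvp_fd(A, lambda tau: np.zeros(1),
                    np.array([[1.0]]), np.array([[0.0]]),
                    np.array([1.0]), 1.0, 200)
```

The same assembly pattern applies when the boundary rows mix initial conditions for the states with terminal conditions for the adjoints.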

#### 5.2.2 Shooting method

For the application of the shooting method, we consider the first-order system (13), i.e., we consider again the case with \(r=0\) (no additional holonomic constraint) and \(\nu=1\).

We use the technique of *reduced superposition* [2, Ch. 4.2.4], which reduces the computational effort of the method. Within the following algorithm, we denote by \(s\) the size of the original system and, thus, by \(2s\) the size of the first-order system we want to solve.

*Step 1:* Search for the fundamental solution of the corresponding homogeneous system. However, using the reduced superposition, it is sufficient to compute \(Y \in\mathbb{R}^{2s,s}\) solving

*Step 2:* Find a solution \(w\in\mathbb{R}^{2s}\) of the initial value problem

*Step 3:* Find the coefficients \(c\in\mathbb{R}^{s}\) given by the linear system
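Steps 1–3 can be sketched for a generic linear first-order system \(\dot{v} = Av + f\) of size \(2s\), with the first \(s\) components fixed at \(t=0\) and \(s\) terminal conditions \(B_T v(T) = b\). The matrices below are generic stand-ins, not the paper's system (13):

```python
# Single-shooting sketch with reduced superposition (Steps 1-3).
# A, f, BT, b are illustrative placeholders for the optimality BVP.
import numpy as np
from scipy.integrate import solve_ivp

def shoot(A, f, x0, BT, b, T):
    s2 = A.shape[0]          # 2s: size of the first-order system
    s = s2 // 2
    # Step 1: s columns of the fundamental solution, started in the
    # directions that are not fixed by the initial condition
    Y0 = np.vstack([np.zeros((s, s)), np.eye(s)])
    Y = np.empty((s2, s))
    for j in range(s):
        sol = solve_ivp(lambda t, v: A @ v, (0.0, T), Y0[:, j],
                        rtol=1e-9, atol=1e-12)
        Y[:, j] = sol.y[:, -1]
    # Step 2: one particular solution with the known initial values
    w0 = np.concatenate([x0, np.zeros(s)])
    w = solve_ivp(lambda t, v: A @ v + f(t), (0.0, T), w0,
                  rtol=1e-9, atol=1e-12).y[:, -1]
    # Step 3: coefficients c from the terminal condition BT v(T) = b
    c = np.linalg.solve(BT @ Y, b - BT @ w)
    return np.concatenate([x0, c])   # consistent full initial value

# toy check: v = (x, p), x' = p, p' = 0, x(0) = 0, x(T) = 1 -> p = 1/T
A = np.array([[0.0, 1.0], [0.0, 0.0]])
v0 = shoot(A, lambda t: np.zeros(2), np.array([0.0]),
           np.array([[1.0, 0.0]]), np.array([1.0]), 2.0)
```

With the returned initial value, one forward integration reproduces the BVP solution \(v(t) = Y(t)c + w(t)\).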

### Remark 10

(Comparison of computational effort) Assume that we always use the same time step size with \(N\) grid points. Then, the finite difference method leads to a system of size \(2sN\) such that the computational effort is quadratic in \(N\). For the shooting method, we have to solve several initial value problems (each using \(N\) time steps). Note that the size of the systems is bounded by the size of the original BVP such that the overall costs are only linear in \(N\) (but with a large constant depending on \(s^{2}\)).

### Remark 11

A more stable extension of the (single) shooting method is called the *multiple shooting method* [2, Ch. 4.4.3]. For this, the time interval \([0,T]\) is partitioned by shooting points \(0 = t_{1} < t _{2} < \cdots< t_{N+1} =T\). On each subinterval \([t_{i}, t_{i+1}]\), we may compute a solution \(y_{i}(t) = Y_{i}(t)c_{i} + w_{i}(t)\) similarly as before. The coefficient vectors \(c_{i}\in\mathbb{R}^{s}\) are given by a linear system that contains the boundary and the continuity conditions in-between the time steps.

### Remark 12

For the other cases, i.e., for \(r>0\) (with holonomic constraints) or different values of \(\nu\), we may need to use different techniques, depending on the structure of the BVP. In the case \(r=0\), \(\nu=0\) (cf. Sect. 3.5.1), we obtain an index-1 DAE as the optimality system and need to consider shooting methods for DAEs; for this, we refer to [24]. In the case \(r>0\), for which we obtain index-3 systems, we refer to [11] or, after an index reduction to index 2, also to [13].

### 5.3 Riccati approach

In the linear case and if \(\nu=0\), i.e., if no derivatives of \(u\) appear in the optimality system (9a)–(9e), the BVP can be solved via a Riccati decoupling. This requires the formulation as a first-order system as in (13), which is already in the standard form considered, e.g., in [25, Ch. 5]. In the case of holonomic constraints, we can use the results on constrained Riccati equations given in [15], which readily apply to constrained multibody equations in the *Gear–Gupta–Leimkuhler* formulation [14].
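The Riccati decoupling can be sketched as follows: integrate the differential Riccati equation backward from the terminal weight and recover the feedback gain \(R^{-1}B^{\mathsf{T}}P(t)\). The system and weight matrices below are generic placeholders for the first-order form, not the paper's concrete model:

```python
# Backward integration of the differential Riccati equation
#   -P' = A^T P + P A - P B R^{-1} B^T P + C^T Q C,  P(T) = S,
# for a generic linear-quadratic problem (placeholder matrices).
import numpy as np
from scipy.integrate import solve_ivp

def riccati_backward(A, B, CtQC, Rinv, S, T, n):
    def rhs(t, p):
        P = p.reshape(n, n)
        dP = -(A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + CtQC)
        return dP.ravel()
    # integrate from t = T down to t = 0 with terminal value S
    sol = solve_ivp(rhs, (T, 0.0), S.ravel(), rtol=1e-8, atol=1e-10)
    return sol.y[:, -1].reshape(n, n)   # P at t = 0

# toy double integrator x1' = x2, x2' = u with Q = I, R = 1, S = 0;
# for large T, P(0) approaches the algebraic Riccati solution
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
P0 = riccati_backward(A, B, np.eye(2), np.array([[1.0]]),
                      np.zeros((2, 2)), 20.0, 2)
```

The resulting \(P(t)\) yields the feedback law used in Sect. 6.1.2; for the tracking problem, an additional feedforward term driven by \(y\) is obtained from a companion linear backward equation.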

## 6 Numerical examples

In this section, we provide several numerical experiments. First, we consider the two-car system from Example 1, i.e., an example without holonomic constraints. Second, we add a third car, which then gives an index-7 DAE in the original formulation. Finally, we consider an overhead crane as an example with \(r>0\), i.e., with a holonomic constraint. The code is written in *Python* and can be obtained from the author’s public *Github* repository [16].

### 6.1 Two-car example

#### 6.1.1 Comparison of DAE and optimal control solution

#### 6.1.2 Feedback representations of the optimization solutions

Another advantage of the optimization approach is that the optimal control can be realized as a feedback. In fact, the first-order optimality conditions (9a)–(9e) suggest that \(u\) depends linearly on \(\lambda\), which depends, possibly nonlinearly, on the state \(x\). For the considered linear case of Example 1, the optimality system can be solved via a differential Riccati equation [25, Ch. 5.1], which directly leads to a feedback representation of the optimal control.

### 6.2 Three-car example

### 6.3 Overhead crane

## 7 Conclusion

Within this paper, we have considered mechanical systems with a partly specified motion, which are usually modeled by DAEs of index \(\ge5\). Such models require strong regularity assumptions, and their numerical treatment is extremely challenging because of the sensitivity to perturbations. Because of this, an alternative modeling approach was introduced, which relaxes the prescribed servo constraint and considers a minimization problem instead. By this we decrease the possible errors that occur in the simulation of a high-index DAE but include an additional error since we do not satisfy the constraint exactly. However, this modeling error is controllable by the penalization parameters.

By numerical examples we have shown the advantages of the optimal control approach. First, the resulting control effort is much smaller at the price of only a small error in the constraint and, thus, more realistic since this corresponds to a reduction of costs in real-world applications. Second, the approach is less sensitive to perturbations such as inconsistent initial data.

Finally, we point out that the usefulness of the proposed approach needs to be determined in experiments as well. Thus, the next steps in this development direction will be the inclusion of sensor and actuator dynamics and the implementation in an experimental setup.

## Notes

### Acknowledgements

Open access funding provided by Max Planck Society. The work of the first author was supported by the ERC Advanced Grant “Modeling, Simulation and Control of Multi-Physics Systems” (MODSIMCONMP). Furthermore, parts of the paper were evolved at Mathematisches Forschungsinstitut Oberwolfach within a *Research in Pairs* stay in November 2014. We are grateful for the invitation and kind hospitality.

### References

- 1. Altmann, R., Betsch, P., Yang, Y.: Index reduction by minimal extension for the inverse dynamics simulation of cranes. Multibody Syst. Dyn. **36**(3), 295–321 (2016)
- 2. Ascher, U.M., Mattheij, R.M.M., Russell, R.D.: Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. SIAM, Philadelphia (1995)
- 3. Betsch, P., Altmann, R., Yang, Y.: Numerical integration of underactuated mechanical systems subjected to mixed holonomic and servo constraints. In: Font-Llagunes, J.M. (ed.) Multibody Dynamics: Computational Methods and Applications, vol. 42, pp. 1–18. Springer, Berlin (2016)
- 4. Brenan, K.E., Campbell, S.L., Petzold, L.R.: Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. SIAM, Philadelphia (1996)
- 5. Blajer, W., Kołodziejczyk, K.: A geometric approach to solving problems of control constraints: Theory and a DAE framework. Multibody Syst. Dyn. **11**(4), 343–364 (2004)
- 6. Blajer, W., Kołodziejczyk, K.: Improved DAE formulation for inverse dynamics simulation of cranes. Multibody Syst. Dyn. **25**(2), 131–143 (2011)
- 7. Blajer, W.: Index of differential-algebraic equations governing the dynamics of constrained mechanical systems. Appl. Math. Model. **16**(2), 70–77 (1992)
- 8. Blajer, W.: Dynamics and control of mechanical systems in partly specified motion. J. Franklin Inst. B **334**(3), 407–426 (1997)
- 9. Blajer, W.: The use of servo-constraints in the inverse dynamics analysis of underactuated multibody systems. J. Comput. Nonlinear Dyn. **9**(4), 1–11 (2014)
- 10. Betsch, P., Uhlar, S., Quasem, M.: On the incorporation of servo constraints into a rotationless formulation of flexible multibody dynamics. In: Proceedings of Multibody Dynamics 2007—ECCOMAS Thematic Conference, 25–28 June, Milano, Italy (2007)
- 11. Clark, K.D., Petzold, L.R.: Numerical solution of boundary value problems in differential-algebraic systems. SIAM J. Sci. Stat. Comput. **10**(5), 915–936 (1989)
- 12. Fliess, M., Lévine, J., Martin, P., Rouchon, P.: Flatness and defect of non-linear systems: Introductory theory and examples. Int. J. Control **61**(6), 1327–1361 (1995)
- 13. Gerdts, M.: Direct shooting method for the numerical solution of higher-index DAE optimal control problems. J. Optim. Theory Appl. **117**(2), 267–294 (2003)
- 14. Gear, C.W., Gupta, G.K., Leimkuhler, B.: Automatic integration of Euler–Lagrange equations with constraints. J. Comput. Appl. Math. **12–13**, 77–90 (1985)
- 15. Heiland, J.: Decoupling and optimization of differential-algebraic equations with application in flow control. PhD thesis, TU Berlin (2014)
- 16. Heiland, J.: holo-servo-opt—A Python module for the solution of multi-body systems with holonomic and servo constraints via optimal control (2015). https://github.com/highlando/holo-servo-opt
- 17. Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, 2nd edn. Springer, Berlin (1996)
- 18. Isidori, A.: Nonlinear Control Systems, 2nd edn. Springer, Berlin (1989)
- 19. Kirgetov, V.I.: The motion of controlled mechanical systems with prescribed constraints (servoconstraints). J. Appl. Math. Mech. **31**, 465–477 (1967)
- 20. Kunkel, P., Mehrmann, V.: Index reduction for differential-algebraic equations by minimal extension. Z. Angew. Math. Mech. **84**(9), 579–597 (2004)
- 21. Kunkel, P., Mehrmann, V.: Differential-Algebraic Equations. Analysis and Numerical Solution. European Mathematical Society Publishing House, Zürich (2006)
- 22.Kunkel, P., Mehrmann, V.: Optimal control for unstructured nonlinear differential-algebraic equations of arbitrary index. Math. Control Signals Syst.
**20**(3), 227–269 (2008) MathSciNetCrossRefMATHGoogle Scholar - 23.Kunkel, P., Mehrmann, V.: Formal adjoints of linear DAE operators and their role in optimal control. Electron. J. Linear Algebra
**22**, 672–693 (2011) MathSciNetCrossRefMATHGoogle Scholar - 24.Lamour, R.: A shooting method for fully implicit index-2 differential algebraic equations. SIAM J. Sci. Comput.
**18**(1), 94–114 (1997) MathSciNetCrossRefMATHGoogle Scholar - 25.Locatelli, A.: Optimal Control. An Introduction. Birkhäuser, Basel (2001) CrossRefMATHGoogle Scholar
- 26.Nachbagauer, K., Oberpeilsteiner, S., Sherif, K., Steiner, W.: The use of the adjoint method for solving typical optimization problems in multibody dynamics. J. Comput. Nonlinear Dyn.
**10**(6), 061011 (2015) CrossRefGoogle Scholar - 27.Seifried, R., Blajer, W.: Analysis of servo-constraint problems for underactuated multibody systems. Mech. Sci.
**4**(1), 113–129 (2013) CrossRefGoogle Scholar - 28.Sontag, E.D.: Mathematical Control Theory. Deterministic Finite Dimensional Systems. Springer, New York (1998) MATHGoogle Scholar
- 29.Wan, F.Y.M.: Introduction to the Calculus of Variations and Its Applications. Chapman & Hall, New York (1995) MATHGoogle Scholar

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.