1 Introduction

In the design and development of many complex multibody systems, researchers must consider trade-offs between various system attributes such as size, performance, comfort, or cost. Computational optimization methods are required for most design tasks, and the gradient of an objective function is heavily exploited in the generation of sensitivities. To this end, a family of black-box methods can be distinguished, in which the gradient computation routine is agnostic of the underlying dynamics. The main approaches of this kind are the finite difference method and the far more involved automatic differentiation [1].

Conversely, one of the most accurate and computationally efficient methods of calculating derivatives of a performance measure with respect to the input variables, such as design or control variables, is based on the mathematical model of the multibody system (MBS). In the optimal design and optimal control (OC) of MBS, an implicit dependency exists between state and input variables. There are two major approaches to capturing this dependency: direct differentiation [2–4] and the adjoint methods [5, 6]. Various formulations of the equations of motion (EOM) yield different adjoint systems, each characterized by different properties [7, 8]. Recent works [9–11] have demonstrated that using constrained Hamilton’s canonical equations, in which Lagrange multipliers enforce constraint equations at the velocity level, one can obtain more stable solutions for differential-algebraic equations (DAEs) as compared to acceleration-based counterparts [12, 13]. This phenomenon is partially connected with the differential index reduction of the resultant Hamilton’s equations.

The adjoint approach represents a comprehensive computational framework rather than a single method, and many authors develop this broad branch of research in various directions. For example, the adjoint method for solving the nonlinear optimal control problem has been applied in different engineering areas such as parameter identification [14], structural mechanics [15], time-optimal control problems [16], and feedforward control design [17]. In the conceptual setting of nonlinear mechanical systems, the control problem is investigated using optimal control theory, where an approximate solution of the resulting set of differential-algebraic equations can be obtained employing a broad variety of numerical procedures. Furthermore, the integration routine may be combined with the discretized optimal control problem to arrive at the discrete adjoint equations [18, 19]. The adjoint method has also been employed in hybrid systems involving discontinuities or switching modes [20, 21].

Another important aspect lies in the selection of the variables describing both the dynamic and adjoint subspaces. It is a common practice in the field of multi-rigid-body dynamic computations to employ a redundant set of coordinates to describe the underlying phenomena. As a result, the dynamic equations of motion are formulated as a set of differential-algebraic equations. Consequently, the adjoint sensitivity analysis also yields a system of DAEs that must be solved to determine the gradient of the performance measure [5–8]. On the other hand, the derivation of the adjoint system based on the joint-coordinate formulation of the underlying dynamics leads to very complicated expressions for the coefficients of the resultant adjoint system. The resulting quantities are much harder to establish systematically than in the case of the redundant DAE formulation [22]. Recent works investigate the relationships and analogies between the adjoint systems formulated as DAEs and ODEs [23, 24], whereas joint-coordinate adjoint formulations are also actively studied in the literature [15].

In this paper, the authors demonstrate a novel formulation of the adjoint method to efficiently compute the gradient of the cost functional that arises in optimal design or optimal control problems. Initially, a multibody system is described in terms of constrained Hamilton’s equations of motion using a redundant set of coordinates. It is well known that such an approach is attractive due to its relative simplicity. However, it leads to a mixed set of differential-algebraic equations, which can be challenging to solve numerically. An alternative is to use a minimal set of coordinates equal to the number of degrees of freedom of a multibody system. Various techniques have been proposed in the literature for coordinate reduction, e.g., [25–27]. The same apparatus can be used to establish the relationships between redundant and independent adjoint variables. Ultimately, the adjoint system is formulated as a set of first-order ODEs with right-hand sides that can be easily presented in a closed form [28, 29].

The primary contributions of this research may be summarized as follows:

  1. We present a novel adjoint-based method for fixed-time optimal control problems that exploits a set of independent adjoint variables incorporated in the Hamiltonian framework.

  2. We show the parallels between the formulation of the equations of motion in mixed redundant-joint coordinates and the necessary conditions arising from the minimization of the cost functional.

  3. We derive the adjoint system of ordinary differential equations and consistent terminal conditions expressed in a set of joint-space coordinates derived through the use of the joint’s motion and constraint force subspaces.

  4. We demonstrate a successful application of the proposed method to gradient computation for the optimal design of a spatial rigid body and the optimal control of a double pendulum on a cart.

This paper is organized as follows. Section 2 presents the equations of motion for rigid multibody systems formulated in redundant and joint-space sets of coordinates. Two complementary subspaces are introduced, which correspond to the joint’s motion and constrained directions. Next, in Sect. 3.1, the design sensitivity problem is concisely recalled. The adjoint method tailored to constrained Hamilton’s equations is presented in Sect. 3.2. The concept of independent adjoint variables is introduced in Sect. 4. The developed method is applied to the optimal design of a spatial pendulum and the optimal control of a double pendulum on a cart. The results are shown in Sect. 5, followed by a discussion presented in Sect. 6. Ultimately, the conclusions are drawn in Sect. 7.

2 Hamilton’s canonical equations

This section introduces the reader to the appropriate background concerning various formulations of Hamilton’s canonical equations of motion. Section 2.1 is focused on the equations expressed in dependent coordinates. Subsequently, the joint-space formulation is derived in Sect. 2.2 by projecting the equations of motion onto the joint’s motion and constraint-force subspaces, respectively. The notation and basic symbols used in this paper are also explained.

2.1 Constrained formulation of EOM

Let us define a set of dependent variables \(\boldsymbol{\mathrm{q}} \in \mathcal {R}^{n_{q}}\) that uniquely describe the configuration of a multibody system. Since the number of configuration variables exceeds the number of degrees of freedom, there exist \(m\) algebraic constraints that can be symbolically written as: \(\boldsymbol{\mathrm{\Upphi }}(\boldsymbol{\mathrm{q}}) = \boldsymbol{\mathrm{0}}\), \(\boldsymbol{\mathrm{\Upphi }}\in \mathcal {R}^{m}\). The time derivative of the constraint equations is equal to: \(\dot{\boldsymbol{\mathrm{\Upphi }}} = \boldsymbol{\mathrm{\Upphi }}_{ \boldsymbol{\mathrm{q}}}\dot{\boldsymbol{\mathrm{q}}}\), where \(\boldsymbol{\mathrm{\Upphi }}_{\boldsymbol{\mathrm{q}}}\in \mathcal {R}^{m \times n_{q}}\) stands for the constraint Jacobian matrix. The time derivative of the generalized coordinates, \(\dot{\boldsymbol{\mathrm{q}}}\), consists of translational (Cartesian coordinates) and rotational (e.g., Euler angles or unit quaternions) components. In contrast, this paper assumes spatial velocities \(\boldsymbol{\mathrm{v}}\in \mathcal {R}^{n_{v}}\) as the primary variables used to formulate the equations of motion. The velocity of an arbitrary body \(A\) can be expressed as \(\boldsymbol{\mathrm{v}}_{A} = [\dot{\boldsymbol{\mathrm{r}}}_{A}^{T}, \, \boldsymbol{\mathrm{\upomega }}_{A}^{T}]^{T}\), where \(\dot{\boldsymbol{\mathrm{r}}}_{A} \in \mathcal{R}^{3}\) and \(\boldsymbol{\mathrm{\upomega }}_{A} \in \mathcal{R}^{3}\) refer to the linear and angular velocity of the body, respectively. Moreover, the spatial velocity is related to the time derivative of the dependent coordinates via a bidirectional, configuration-dependent map, which allows one to rewrite the derivative of the constraint equations in the following way:

$$ \dot{\boldsymbol{\mathrm{\Upphi }}} = \boldsymbol{\mathrm{D}}^{T} \boldsymbol{\mathrm{v}}= \boldsymbol{\mathrm{0}} . $$
(1)

Here, the matrix \(\boldsymbol{\mathrm{D}}^{T}\) plays the role of the constraint Jacobian.

Let ℒ be the system Lagrangian function defined by \(\mathcal{L}(\boldsymbol{\mathrm{q}},\boldsymbol{\mathrm{v}})=T( \boldsymbol{\mathrm{q}},\boldsymbol{\mathrm{v}})-V( \boldsymbol{\mathrm{q}}) \), where \(T=\frac{1}{2}\boldsymbol{\mathrm{v}}^{T}\boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{v}}\) and \(V\) are the kinetic and potential energy of the system, respectively, and let the symbol \(\boldsymbol{\mathrm{M}} \in \mathcal {R}^{n_{v} \times n_{v}}\) denote the mass matrix. Canonical momenta \(\boldsymbol{\mathrm{p}} \in \mathcal{R}^{n_{v}}\) conjugate to the velocities \(\boldsymbol{\mathrm{v}}\) are defined as:

$$ \boldsymbol{\mathrm{p}}=\left ( \frac{\partial \mathcal{L}}{\partial \boldsymbol{\mathrm{v}}}\right )^{T}= \boldsymbol{\mathrm{M}}\boldsymbol{\mathrm{v}}. $$
(2)

Subsequently, the Hamiltonian function of a multibody system can be expressed as \(\mathcal{H}=\boldsymbol{\mathrm{p}}^{T}\boldsymbol{\mathrm{v}}- \mathcal{L}\). Hamilton’s equations of motion for a constrained multi-rigid-body system can, therefore, be formulated as follows:

$$ \boldsymbol{\mathrm{p}}=\boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{v}},\quad \dot{\boldsymbol{\mathrm{p}}}= \boldsymbol{\mathrm{Q}}-\boldsymbol{\mathrm{D}} \boldsymbol{\mathrm{\uplambda }}, $$
(3)

where \(\boldsymbol{\mathrm{Q}}= \boldsymbol{\mathrm{Q}}_{ex} - \mathcal{H}_{ \boldsymbol{\mathrm{q}}}^{T}\) is the sum of external non-conservative and conservative forces. The term \(\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{\uplambda }}\) reflects constraint reaction forces, where the quantity \(\boldsymbol{\mathrm{\uplambda }}\in \mathcal{R}^{m}\) is a vector of Lagrange multipliers that represent constraint forces at joints, distributed along the directions indicated by the columns of the matrix \(\boldsymbol{\mathrm{D}}\), the transpose of the Jacobian \(\boldsymbol{\mathrm{D}}^{T}\in \mathcal{R}^{m\times n_{v}}\).

Now, let us augment the Lagrangian function with a term explicitly enforcing the kinematic velocity constraints (1): \(\mathcal{L}^{*}=\mathcal{L}+\boldsymbol{\mathrm{\upsigma }}^{T} \dot{\boldsymbol{\mathrm{\Upphi }}}\). The quantity \(\boldsymbol{\mathrm{\upsigma }}\in \mathcal{R}^{m}\) represents a vector of \(m\) new Lagrange multipliers associated with the velocity level constraint equations. Based on this modification, we can define the augmented momenta in the following way:

$$ \boldsymbol{\mathrm{p}}^{*}=\left ( \frac{\partial \mathcal{L}^{*}}{\partial \boldsymbol{\mathrm{v}}} \right )^{T} =\boldsymbol{\mathrm{M}}\boldsymbol{\mathrm{v}}+ \boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{\upsigma }} . $$
(4)

It can be easily shown that \(\dot{\boldsymbol{\mathrm{\upsigma }}}= \boldsymbol{\mathrm{\uplambda }}\). The physical interpretation of this relation is that \(\boldsymbol{\mathrm{\upsigma }}\) is a vector of constraint force impulses. Moreover, it follows from the definition of the augmented momenta that \(\boldsymbol{\mathrm{p}}^{*} = \boldsymbol{\mathrm{p}} + \boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{\upsigma }}\). By differentiating this relation, invoking the property \(\dot{\boldsymbol{\mathrm{\upsigma }}}= \boldsymbol{\mathrm{\uplambda }}\), and substituting the result into Eq. (3), we arrive at the following formula for the derivative of the augmented momenta:

$$ \dot{\boldsymbol{\mathrm{p}}}^{*}=\boldsymbol{\mathrm{Q}}+ \dot{\boldsymbol{\mathrm{D}}}\boldsymbol{\mathrm{\upsigma }}. $$
(5)

Ultimately, equations (1), (4), and (5) can be collected to form a set of constrained Hamilton’s canonical equations of motion:

$$\begin{aligned} &\left [ \textstyle\begin{array}{c@{\quad}c} \boldsymbol{\mathrm{M}}(\boldsymbol{\mathrm{q}}) & \boldsymbol{\mathrm{D}}(\boldsymbol{\mathrm{q}}) \\ \boldsymbol{\mathrm{D}}^{T}(\boldsymbol{\mathrm{q}}) & \boldsymbol{\mathrm{0}} \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{c} \boldsymbol{\mathrm{v}} \\ \boldsymbol{\mathrm{\upsigma }} \end{array}\displaystyle \right ] = \left [ \textstyle\begin{array}{c} \boldsymbol{\mathrm{p}}^{*} \\ \boldsymbol{\mathrm{0}} \end{array}\displaystyle \right ] , \end{aligned}$$
(6a)
$$\begin{aligned} &\dot{\boldsymbol{\mathrm{p}}}^{*}=\boldsymbol{\mathrm{Q}}( \boldsymbol{\mathrm{q}}, \boldsymbol{\mathrm{v}}) + \dot{\boldsymbol{\mathrm{D}}}(\boldsymbol{\mathrm{q}}, \boldsymbol{\mathrm{v}}) \boldsymbol{\mathrm{\upsigma }} . \end{aligned}$$
(6b)

Equations (6a), (6b) can be integrated numerically starting from an initial configuration described by the position (\({\boldsymbol{\mathrm{q}}} \rvert _{t=0}\)) and momenta (\({\boldsymbol{\mathrm{p}}^{*}} \rvert _{t=0}\)) vectors. The initial momentum vector can be obtained directly from the prescribed velocity via Eq. (4) by substituting \({\boldsymbol{\mathrm{\upsigma }}} \rvert _{t=0} = \boldsymbol{\mathrm{0}}\).
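To make the velocity stage of this integration concrete, the following sketch (an illustration, not the authors' implementation) solves the saddle-point system of Eq. (6a) for \(\boldsymbol{\mathrm{v}}\) and \(\boldsymbol{\mathrm{\upsigma}}\). The toy data model a planar particle of unit mass whose vertical velocity is constrained to zero; the matrices `M` and `D` are assumed placeholders for a concrete mechanism:

```python
import numpy as np

def solve_velocity_stage(M, D, p_star):
    """Solve the saddle-point system of Eq. (6a) for v and sigma:

        [ M    D ] [ v     ]   [ p* ]
        [ D^T  0 ] [ sigma ] = [ 0  ]
    """
    n, m = M.shape[0], D.shape[1]
    K = np.block([[M, D], [D.T, np.zeros((m, m))]])
    rhs = np.concatenate([p_star, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Planar particle (unit mass) with the velocity constraint D^T v = v_y = 0.
M = np.eye(2)
D = np.array([[0.0], [1.0]])
v, sigma = solve_velocity_stage(M, D, np.array([3.0, 4.0]))
# v satisfies D^T v = 0; the impulse sigma absorbs the vertical momentum.
```

In a full simulation this solve would be performed at every time step, after which \(\dot{\boldsymbol{\mathrm{p}}}^{*}\) from Eq. (6b) and \(\dot{\boldsymbol{\mathrm{q}}}\) are advanced by the chosen integrator.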

Equations of motion (6a), (6b) constitute \(2n_{v}+m\) differential-algebraic equations of index two with position, momenta, and Lagrange multipliers as unknowns. According to the literature, this approach is often more efficient and numerically stable than the acceleration-based formulation, mainly due to the lowered differential index [10, 12, 13]. The formulation based on spatial velocities is preferred to the corresponding formulation presented in ref. [30] since it seamlessly integrates with the joint space formulation presented in the following subsection. This property will be even more useful when deriving the relationships between adjoint variables in Sect. 4.

2.2 Joint–space equations of motion

This section presents the derivation of joint-space Hamilton’s equations via the projection method. Equations of motion for an open-loop system are demonstrated first and then generalized to account for closed-loop topologies. Two orthogonal subspaces are defined next, namely the joint’s motion subspace \(\boldsymbol{\mathrm{H}}_{j} \in \mathcal{R}^{6\times n_{\mathit{dof}}^{j}}\) and the constrained motion subspace \(\boldsymbol{\mathrm{D}}_{j} \in \mathcal{R}^{6\times (6-n_{\mathit{dof}}^{j})}\), which can be utilized to conveniently represent joints in a multibody system [31]. For example, to describe a spherical joint interconnecting two bodies, we would define these subspaces as:

$$ \boldsymbol{\mathrm{H}}_{j} = \begin{bmatrix} -\widetilde{\boldsymbol{\mathrm{s}}_{1c}} \\ \boldsymbol{\mathrm{1}}_{3 \times 3} \end{bmatrix} , \qquad \boldsymbol{\mathrm{D}}_{j} = \begin{bmatrix} \boldsymbol{\mathrm{1}}_{3\times 3} \\ -\widetilde{\boldsymbol{\mathrm{s}}_{1c}} \end{bmatrix} , $$

where \(\boldsymbol{\mathrm{s}}_{1c}\in \mathcal {R}^{3}\) denotes a vector from the joint’s origin to the body-fixed coordinate frame, typically the center of mass, as shown in Fig. 1. The tilde operator transforms a vector into a skew-symmetric matrix, so that multiplication by another vector corresponds to the cross product of the two vectors. The relative spatial velocity of a single body can be described with the aid of the allowable motion subspace as: \(\boldsymbol{\mathrm{v}}_{A}^{\text{rel}} = \boldsymbol{\mathrm{H}}_{A} \boldsymbol{\mathrm{\upbeta }}_{A}\), where \(\boldsymbol{\mathrm{\upbeta }}_{A} \in \mathcal{R}^{n_{\beta}^{A}}\) is the joint velocity vector denoting the relative velocity of body \(A\) with respect to the velocity of its parent body. Subsequently, one can show that there is a linear transformation between joint velocities \(\boldsymbol{\mathrm{\upbeta }}\) and spatial velocities \(\boldsymbol{\mathrm{v}}\), which leads to the following global expression [9, 31]:

$$ \boldsymbol{\mathrm{v}}= \boldsymbol{\mathrm{H}} \boldsymbol{\mathrm{\upbeta }}. $$
(7)

Here, both global and joint velocities are presented in a stacked vector format, whereas \(\boldsymbol{\mathrm{H}}(\boldsymbol{\mathrm{\upalpha }}) \in \mathcal {R}^{n_{v} \times k}\) (with subindex dropped) denotes the global allowable motion subspace. The matrix \(\boldsymbol{\mathrm{H}}\) depends on the joint coordinates \(\boldsymbol{\mathrm{\upalpha }}\in \mathcal {R}^{n_{\upalpha}}\), whose time derivative is uniquely related to the joint velocities, similarly to the absolute-coordinate case. Nevertheless, in many practical situations, it is correct to assume that the joint velocity is equal to the derivative of the joint coordinates, i.e., \(\boldsymbol{\mathrm{\upbeta }}= \dot{\boldsymbol{\mathrm{\upalpha }}}\). Let us pinpoint that Eq. (7) is the explicit representation of the constraints \(\dot{\boldsymbol{\mathrm{\Upphi }}} = \boldsymbol{\mathrm{0}}\) [32]. Since \(\boldsymbol{\mathrm{\upbeta }}\) is a vector of unconstrained variables for open-chain systems, the following relation also holds: \(\boldsymbol{\mathrm{D}}^{T}\boldsymbol{\mathrm{H}}= \boldsymbol{\mathrm{0}}\).
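The block structure of the spherical-joint subspaces quoted earlier, and the complementarity relation \(\boldsymbol{\mathrm{D}}^{T}\boldsymbol{\mathrm{H}}= \boldsymbol{\mathrm{0}}\), can be checked numerically. The helper `tilde` and the function below are illustrative names, not part of the paper:

```python
import numpy as np

def tilde(s):
    """Skew-symmetric matrix such that tilde(s) @ u == np.cross(s, u)."""
    return np.array([[0.0, -s[2], s[1]],
                     [s[2], 0.0, -s[0]],
                     [-s[1], s[0], 0.0]])

def spherical_joint_subspaces(s1c):
    """Motion subspace H_j and constraint-force subspace D_j of a
    spherical joint, following the block structure given in the text."""
    I3 = np.eye(3)
    H = np.vstack([-tilde(s1c), I3])   # 6 x 3: allowed relative motions
    D = np.vstack([I3, -tilde(s1c)])   # 6 x 3: constrained directions
    return H, D

H, D = spherical_joint_subspaces(np.array([0.1, -0.2, 0.3]))
# Complementarity of the two subspaces: D^T H = 0.
```

Since the transpose of a skew-symmetric matrix flips its sign, \(\boldsymbol{\mathrm{D}}_{j}^{T}\boldsymbol{\mathrm{H}}_{j} = -\widetilde{\boldsymbol{\mathrm{s}}}_{1c} + \widetilde{\boldsymbol{\mathrm{s}}}_{1c} = \boldsymbol{\mathrm{0}}\), which the check confirms.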

Fig. 1 A multibody system described in joint coordinates

At this point, Hamilton’s equations of motion can be projected onto the global allowable motion subspace \(\boldsymbol{\mathrm{H}}\). Premultiplying Eq. (6a) by \(\boldsymbol{\mathrm{H}}^{T}\) while taking into account that \(\boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{D}} \boldsymbol{\mathrm{\upsigma }}= \boldsymbol{\mathrm{0}}\) leads to the first set of canonical equations:

$$ \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{p}}^{*}= \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{H}}\boldsymbol{\mathrm{\upbeta }}\quad \Rightarrow \quad \widehat{\boldsymbol{\mathrm{p}}} = \widehat{\boldsymbol{\mathrm{M}}} \boldsymbol{\mathrm{\upbeta }}, $$
(8)

where \(\widehat{\boldsymbol{\mathrm{p}}} \in \mathcal {R}^{k}\) denotes joint momenta in stacked vector format, and \(\widehat{\boldsymbol{\mathrm{M}}} \in \mathcal {R}^{k \times k}\) can be interpreted as joint-space mass matrix. Consequently, the time derivative of \(\widehat{\boldsymbol{\mathrm{p}}}\) can be written as: \(\dot{\widehat{\boldsymbol{\mathrm{p}}}} = \boldsymbol{\mathrm{H}}^{T} \dot{\boldsymbol{\mathrm{p}}}^{*}+ \dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{p}}^{*}\). Now, let us premultiply Eq. (6b) by \(\boldsymbol{\mathrm{H}}^{T}\) and add the term \(\dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{p}}^{*}\) to both sides of the resultant equation. This allows for the following calculation:

$$\begin{aligned} \begin{aligned} \boldsymbol{\mathrm{H}}^{T} \dot{\boldsymbol{\mathrm{p}}}^{*}+ \dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{p}}^{*}&= \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{Q}} + \boldsymbol{\mathrm{H}}^{T} \dot{\boldsymbol{\mathrm{D}}} \boldsymbol{\mathrm{\upsigma }}+ \dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{p}}^{*} \\ \dot{\widehat{\boldsymbol{\mathrm{p}}}} &= \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{Q}} + \boldsymbol{\mathrm{H}}^{T} \dot{\boldsymbol{\mathrm{D}}}\boldsymbol{\mathrm{\upsigma }}+ \dot{\boldsymbol{\mathrm{H}}}^{T} (\boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{v}}+ \boldsymbol{\mathrm{D}} \boldsymbol{\mathrm{\upsigma }}) \\ &= \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{Q}} + \dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{v}}+ (\boldsymbol{\mathrm{H}}^{T} \dot{\boldsymbol{\mathrm{D}}}+ \dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{D}}) \boldsymbol{\mathrm{\upsigma }}. \end{aligned} \end{aligned}$$
(9)

By recalling that \(\boldsymbol{\mathrm{H}}^{T} \dot{\boldsymbol{\mathrm{D}}}+ \dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{D}}= \frac{\mathrm{d}}{\mathrm{d}t}(\boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{D}}) = \boldsymbol{\mathrm{0}}\), we come up with the following set of governing equations for open-chain systems:

$$\begin{gathered} \widehat{\boldsymbol{\mathrm{p}}} = \widehat{\boldsymbol{\mathrm{M}}} \boldsymbol{\mathrm{\upbeta }}, \end{gathered}$$
(10a)
$$\begin{gathered} \dot{\widehat{\boldsymbol{\mathrm{p}}}} = \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{Q}} + \dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{H}} \boldsymbol{\mathrm{\upbeta }}. \end{gathered}$$
(10b)

Let us note that certain terms from the global formulation, such as \(\boldsymbol{\mathrm{Q}}\) or \(\boldsymbol{\mathrm{M}}\), also appear in Eqs. (10a), (10b). Although we have defined them as dependent on the absolute coordinates \(\boldsymbol{\mathrm{q}}\), \(\boldsymbol{\mathrm{v}}\), it is possible to reformulate these quantities to depend on joint variables. Furthermore, the joint coordinates can be mapped onto the vector of absolute coordinates via a unique, nonlinear, possibly recursive relation that can be symbolically expressed as \(\boldsymbol{\mathrm{q}} = \boldsymbol{\mathrm{g}} ( \boldsymbol{\mathrm{\upalpha }})\).
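The projection underlying Eqs. (8)–(10b) can be verified numerically for arbitrary data. The sketch below (random \(\boldsymbol{\mathrm{M}}\) and \(\boldsymbol{\mathrm{H}}\) standing in for a concrete mechanism; dimensions chosen arbitrarily) checks that projecting \(\boldsymbol{\mathrm{p}}^{*}=\boldsymbol{\mathrm{M}}\boldsymbol{\mathrm{v}}\) with \(\boldsymbol{\mathrm{v}}=\boldsymbol{\mathrm{H}}\boldsymbol{\mathrm{\upbeta}}\) reproduces \(\widehat{\boldsymbol{\mathrm{p}}}=\widehat{\boldsymbol{\mathrm{M}}}\boldsymbol{\mathrm{\upbeta}}\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random configuration data standing in for a concrete mechanism:
# n_v spatial velocity coordinates, k joint velocities.
n_v, k = 6, 2
M = np.diag(rng.uniform(1.0, 5.0, n_v))   # spatial mass matrix
H = rng.standard_normal((n_v, k))         # global motion subspace
beta = rng.standard_normal(k)             # joint velocities

v = H @ beta            # Eq. (7): spatial velocities
p_star = M @ v          # augmented momenta with sigma = 0
p_hat = H.T @ p_star    # projected joint momenta, left side of Eq. (8)
M_hat = H.T @ M @ H     # joint-space mass matrix, Eq. (8)
```

The identity `p_hat == M_hat @ beta` holds by construction here; in an actual simulation the same projection compresses the \(n_{v}\)-dimensional momentum balance into \(k\) joint-space equations.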

The governing equations (10a), (10b) are applicable only to open-loop MBS. If there are one or more kinematic closed loops in a system, the joint coordinates become dependent. A cut-joint method is usually employed to transform a closed-loop topology into an open-loop mechanism [33]. Loop-closure constraint equations \(\boldsymbol{\mathrm{\Uptheta }}= \boldsymbol{\mathrm{0}}\), \(\boldsymbol{\mathrm{\Uptheta }}\in \mathcal{R}^{m_{c}}\) are imposed to reflect that the joint coordinates are dependent. The time derivative of the loop constraints can be written as: \(\dot{\boldsymbol{\mathrm{\Uptheta }}} = \boldsymbol{\mathrm{\Upgamma }}^{T} \boldsymbol{\mathrm{\upbeta }}= \boldsymbol{\mathrm{0}}\), where \(\boldsymbol{\mathrm{\Upgamma }}^{T} \in \mathcal{R}^{m_{c} \times n_{ \beta}}\) represents the constraint Jacobian matrix. It can be shown that the constrained form of the equations of motion reads:

$$\begin{gathered} \widehat{\boldsymbol{\mathrm{p}}} = \widehat{\boldsymbol{\mathrm{M}}} \boldsymbol{\mathrm{\upbeta }}+ \boldsymbol{\mathrm{\Upgamma }} \boldsymbol{\mathrm{\upsigma }}_{c}, \end{gathered}$$
(11a)
$$\begin{gathered} \dot{\widehat{\boldsymbol{\mathrm{p}}}} = \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{Q}} + \dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{H}} \boldsymbol{\mathrm{\upbeta }}+ \dot{\boldsymbol{\mathrm{\Upgamma }}} \boldsymbol{\mathrm{\upsigma }}_{c} , \end{gathered}$$
(11b)

where \(\boldsymbol{\mathrm{\upsigma }}_{c} \in \mathcal{R}^{m_{c}}\) is a vector of constraint impulse loads associated with loop-closure constraints.

3 Adjoint sensitivity analysis

3.1 Introduction

A prevalent task arising in the field of optimal design or control of MBS is to minimize the following performance index:

$$ J = \int _{0}^{t_{f}} h \big(t, \boldsymbol{\mathrm{q}}, \boldsymbol{\mathrm{v}}, \boldsymbol{\mathrm{b}}\big) \, \mathrm{d}t+ {S \big(\boldsymbol{\mathrm{q}}, \boldsymbol{\mathrm{v}}\big)} \rvert _{t_{f}} , $$
(12)

where the integrand, denoted by \(h\), represents the expression to be minimized over a fixed time horizon \(t_{f}\), and \(\boldsymbol{\mathrm{b}}\in \mathcal {R}^{n_{b}}\) denotes a vector of design parameters or a set of discretized input functions. The second term in Eq. (12) is a terminal cost, suitable for prescribing a particular state of the system at the fixed terminal time \(t_{f}\).

The most efficient local optimization algorithms that minimize the performance measure (12) rely heavily on its gradient. This computation phase is called sensitivity analysis, and it usually constitutes a performance bottleneck. Many approaches are available to cope with the problem of efficient gradient computation. The most straightforward is a purely numerical approach that treats the underlying dynamics as a black box and computes the gradient via finite differences. However, the list of the method’s drawbacks is relatively long: from the high computational burden of evaluating a single derivative, through poor scaling with the number of design variables, to the ambiguity in the choice of the perturbation step. Alternatively, one may rely on the mathematical model of the dynamic equations. This family of methods is significantly more difficult to implement; however, it yields the most accurate and computationally efficient results. A direct approach to the sensitivity analysis, known as the direct differentiation method, applies the chain rule of differentiation to the performance measure (12):
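The black-box finite-difference approach can be sketched in a few lines. Note that each gradient component requires two additional objective evaluations, which is precisely the unfavorable scaling with \(n_{b}\) mentioned above (the objective function and step size below are illustrative, not taken from the paper):

```python
import numpy as np

def fd_gradient(J, b, h=1e-6):
    """Central-difference gradient of a scalar objective J at point b.

    Treats J as a black box: each partial derivative costs two extra
    evaluations, so the total cost grows linearly with len(b).
    """
    b = np.asarray(b, dtype=float)
    grad = np.zeros_like(b)
    for i in range(b.size):
        e = np.zeros_like(b)
        e[i] = h
        grad[i] = (J(b + e) - J(b - e)) / (2.0 * h)
    return grad

# Toy objective J(b) = b1^2 + 3*b2 with exact gradient (2*b1, 3).
g = fd_gradient(lambda b: b[0]**2 + 3.0 * b[1], [2.0, -1.0])
```

In a multibody setting each call to `J` would involve a full forward simulation, which makes this approach prohibitively expensive for many design variables.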

$$ \nabla _{\boldsymbol{\mathrm{b}}} J=\int _{0}^{t_{f}} \big(h_{ \boldsymbol{\mathrm{q}}} \boldsymbol{\mathrm{q}}_{ \boldsymbol{\mathrm{b}}} + h_{\boldsymbol{\mathrm{v}}} \boldsymbol{\mathrm{v}}_{\boldsymbol{\mathrm{b}}} + h_{ \boldsymbol{\mathrm{b}}} \big) \, \mathrm{d}t+ {S_{ \boldsymbol{\mathrm{q}}} \boldsymbol{\mathrm{q}}_{ \boldsymbol{\mathrm{b}}}} \rvert _{t_{f}} + {S_{ \boldsymbol{\mathrm{v}}} \boldsymbol{\mathrm{v}}_{ \boldsymbol{\mathrm{b}}}} \rvert _{t_{f}} . $$
(13)

The idea is to compute the implicit state derivatives \(\boldsymbol{\mathrm{q}}_{\boldsymbol{\mathrm{b}}}\), \(\boldsymbol{\mathrm{v}}_{\boldsymbol{\mathrm{b}}}\) by differentiating the underlying equations of motion (6a), (6b) with respect to the design or control variables \(\boldsymbol{\mathrm{b}}\). This method proves to be reliable and efficient when the number of constraints is relatively large [2, 3], since the state derivatives can be reused for each of the constraint equations. Nevertheless, this approach requires solving \(\dim (\boldsymbol{\mathrm{b}})=n_{b}\) sub-problems of a size comparable to the underlying EOM. Concurrently, it is more often the case in the field of multibody dynamics that the number of constraints is relatively low compared with the dimension of \(\boldsymbol{\mathrm{b}}\). The adjoint method solves a different set of equations, thus avoiding the cumbersome computation of the state derivatives. Therefore, its efficiency is not affected by the number of design/control variables, and the total cost of computing the gradient is only proportional to the size of the underlying dynamic problem. This property makes the adjoint method the most viable option for efficient and reliable gradient computation, as will be briefly presented in Sect. 3.2.

3.2 The adjoint method

Equations of motion must be fulfilled throughout the optimization process; hence, it is valid to treat them as constraints imposed on the design variables \(\boldsymbol{\mathrm{b}}\). The key principle of the adjoint method is the inclusion of these constraints into the generic cost functional (12). For brevity, let us consider a set of DAEs describing the dynamics of a multibody system, such as Eqs. (6a), (6b), in the following implicit form: \(\boldsymbol{\mathrm{F}}(\boldsymbol{\mathrm{b}}, \boldsymbol{\mathrm{y}}, \dot{\boldsymbol{\mathrm{y}}}, \boldsymbol{\mathrm{\upsigma }})=\boldsymbol{\mathrm{0}}\). The symbol \(\boldsymbol{\mathrm{y}}\) represents a vector of state variables of appropriate size. Consequently, the performance measure (12) may be extended to include the dynamic equations of motion in the following way:

$$\begin{aligned} \overline{J}=\int _{0}^{t_{f}}\left [ h(\boldsymbol{\mathrm{b}}, \boldsymbol{\mathrm{y}}, \dot{\boldsymbol{\mathrm{y}}}) + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}( \boldsymbol{\mathrm{b}}, \boldsymbol{\mathrm{y}}, \dot{\boldsymbol{\mathrm{y}}}, \boldsymbol{\mathrm{\upsigma }}) \right ] \, \mathrm{d}t+ { S(\boldsymbol{\mathrm{y}}, \dot{\boldsymbol{\mathrm{y}}}) } \rvert _{t_{f}} , \end{aligned}$$
(14)

where \(\boldsymbol{\mathrm{w}}(t) \in \mathcal {R}^{2n_{v}+m}\) is a vector of arbitrary, time-dependent multipliers, known as adjoint, or costate, variables. Let us investigate the variation of the extended performance measure (14):

$$\begin{aligned} \begin{aligned} \updelta \overline{J}=\int _{0}^{t_{f}}&\big[ h_{ \boldsymbol{\mathrm{b}}} \updelta \boldsymbol{\mathrm{b}}+ h_{ \boldsymbol{\mathrm{y}}}\updelta {\boldsymbol{\mathrm{y}}} + h_{ \dot{\boldsymbol{\mathrm{y}}}}\updelta {\dot{\boldsymbol{\mathrm{y}}}} + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \boldsymbol{\mathrm{y}}} \updelta \boldsymbol{\mathrm{y}} + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \dot{\boldsymbol{\mathrm{y}}}} \updelta \dot{\boldsymbol{\mathrm{y}}} \\ &+ \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \boldsymbol{\mathrm{b}}} \updelta \boldsymbol{\mathrm{b}}+ \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \boldsymbol{\mathrm{\upsigma }}} \updelta \boldsymbol{\mathrm{\upsigma }}\big] \, \mathrm{d}t+ {S_{ \boldsymbol{\mathrm{y}}} \updelta \boldsymbol{\mathrm{y}}} \rvert _{t_{f}} + {S_{\dot{\boldsymbol{\mathrm{y}}}} \updelta \dot{\boldsymbol{\mathrm{y}}}} \rvert _{t_{f}} . \end{aligned} \end{aligned}$$
(15)

The total variation \(\updelta \overline{J}\) is a result of variations \(\updelta \boldsymbol{\mathrm{y}}\), \(\updelta \boldsymbol{\mathrm{\upsigma }}\), \(\updelta \boldsymbol{\mathrm{b}}\), respectively. The variation \(\updelta \dot{\boldsymbol{\mathrm{y}}}\) that appears under the integral can be eliminated by integrating appropriate expression by parts:

$$\begin{aligned} \begin{aligned} \int _{0}^{t_{f}}\Big[ \big(h_{\dot{\boldsymbol{\mathrm{y}}}} + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \dot{\boldsymbol{\mathrm{y}}}} \big) \updelta \dot{\boldsymbol{\mathrm{y}}} \Big] \, \mathrm{d}t= &-\int _{0}^{t_{f}} \Big[ \big(\dot{h}_{\dot{\boldsymbol{\mathrm{y}}}} + \frac{\mathrm{d}}{\mathrm{d}t}{(\boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{\dot{\boldsymbol{\mathrm{y}}}})} \big) \updelta \boldsymbol{\mathrm{y}} \Big] \, \mathrm{d}t \\ &+ { \big( h_{\dot{\boldsymbol{\mathrm{y}}}} + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \dot{\boldsymbol{\mathrm{y}}}} \big) \updelta \boldsymbol{\mathrm{y}}} \rvert _{t_{f}} . \end{aligned} \end{aligned}$$
(16)

Only the final-time component is retained outside the integral in Eq. (16), since \({\updelta \boldsymbol{\mathrm{y}}} \rvert _{t=0} = \boldsymbol{\mathrm{0}}\) when the initial conditions are explicitly prescribed. Upon substituting Eq. (16) into Eq. (15), we come up with the following expression:

$$\begin{aligned} \begin{aligned} \updelta \overline{J}=\int _{0}^{t_{f}}&\Big[ h_{ \boldsymbol{\mathrm{b}}} \updelta \boldsymbol{\mathrm{b}}+ \big(h_{ \boldsymbol{\mathrm{y}}} - \dot{h}_{\dot{\boldsymbol{\mathrm{y}}}} + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \boldsymbol{\mathrm{y}}} - \frac{\mathrm{d}}{\mathrm{d}t}{( \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \dot{\boldsymbol{\mathrm{y}}}})} \big) \updelta { \boldsymbol{\mathrm{y}}} + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{\boldsymbol{\mathrm{b}}} \updelta \boldsymbol{\mathrm{b}} \\ &+ \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \boldsymbol{\mathrm{\upsigma }}} \updelta \boldsymbol{\mathrm{\upsigma }}\Big] \, \mathrm{d}t+ { \big( S_{ \boldsymbol{\mathrm{y}}} + h_{\dot{\boldsymbol{\mathrm{y}}}} + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \dot{\boldsymbol{\mathrm{y}}}} \big) \updelta \boldsymbol{\mathrm{y}} } \rvert _{t_{f}} + {S_{ \dot{\boldsymbol{\mathrm{y}}}} \updelta \dot{\boldsymbol{\mathrm{y}}}} \rvert _{t_{f}}. \end{aligned} \end{aligned}$$
(17)

At this point, the adjoint variables have arbitrary values. The goal of the adjoint method is to pick such a set of adjoint variables that simplifies Eq. (17) to an expression involving solely the variations of the design or control variables \(\updelta \boldsymbol{\mathrm{b}}\):

$$ \updelta \overline{J}=\int _{0}^{t_{f}} \big(h_{ \boldsymbol{\mathrm{b}}} + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{\boldsymbol{\mathrm{b}}} \big) \updelta \boldsymbol{\mathrm{b}}\, \mathrm{d}t. $$
(18)

Equation (18) exposes the gradient of the performance measure and allows us to obtain a unique expression for its computation by exploiting the adjoint variables:

  • \(\nabla _{\boldsymbol{\mathrm{b}}} \overline{J} = \int _{0}^{t_{f}} \big(h_{\boldsymbol{\mathrm{b}}} + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \boldsymbol{\mathrm{b}}} \big) \, \mathrm{d}t\) – when \(\boldsymbol{\mathrm{b}}\) represents design variables

  • \(\nabla _{\boldsymbol{\mathrm{b}}} \overline{J} \approx \Delta t \, \big(h_{\boldsymbol{\mathrm{b}}} + \boldsymbol{\mathrm{w}}^{T} \boldsymbol{\mathrm{F}}_{ \boldsymbol{\mathrm{b}}} \big)\) – when \(\boldsymbol{\mathrm{b}}\) denotes discretized control input signals, and \(\Delta t\) is the discretization time-step.
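Once the forward and adjoint sweeps have been completed, the first of these expressions reduces to a quadrature along the stored trajectories. The sketch below is illustrative only: the function name, array shapes, and the trapezoidal rule are our assumptions, not the paper's implementation.

```python
import numpy as np

def gradient_wrt_design(t, h_b, w, F_b):
    """Approximate grad_b J = int_0^tf (h_b + w^T F_b) dt by the trapezoidal rule.

    t   : (N,)        time grid of the stored forward/adjoint solutions
    h_b : (N, nb)     partial derivative of h with respect to b at each instant
    w   : (N, nw)     adjoint variables along the trajectory
    F_b : (N, nw, nb) partial derivative of the dynamics residual F w.r.t. b
    """
    # pointwise integrand h_b + w^T F_b, shape (N, nb)
    integrand = h_b + np.einsum('ki,kij->kj', w, F_b)
    dt = np.diff(t)
    # trapezoidal quadrature over the time axis
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) * dt[:, None], axis=0)
```

In practice the integrand would be assembled from the trajectories recorded during the forward sweep and the backward adjoint sweep.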

To find a set of design variables or control functions that produces a stationary value of the cost functional \(J\), the adjoint variables are required to fulfill the following set of necessary conditions:

$$\begin{gathered} \boldsymbol{\mathrm{F}}_{\dot{\boldsymbol{\mathrm{y}}}}^{T} \dot{\boldsymbol{\mathrm{w}}} = h_{\boldsymbol{\mathrm{y}}}^{T} - \dot{h}_{\dot{\boldsymbol{\mathrm{y}}}}^{T} + ( \boldsymbol{\mathrm{F}}_{\boldsymbol{\mathrm{y}}}^{T} - \dot{\boldsymbol{\mathrm{F}}}_{\dot{\boldsymbol{\mathrm{y}}}}^{T}) \boldsymbol{\mathrm{w}} , \end{gathered}$$
(19a)
$$\begin{gathered} \boldsymbol{\mathrm{F}}_{\boldsymbol{\mathrm{\upsigma }}}^{T} \boldsymbol{\mathrm{w}} = \boldsymbol{\mathrm{0}} , \end{gathered}$$
(19b)
$$\begin{gathered} {\boldsymbol{\mathrm{F}}_{\dot{\boldsymbol{\mathrm{y}}}}^{T} \boldsymbol{\mathrm{w}}} \rvert _{t_{f}} = -S_{ \boldsymbol{\mathrm{y}}}^{T} - {h_{\dot{\boldsymbol{\mathrm{y}}}}^{T}} \rvert _{t_{f}} , \end{gathered}$$
(19c)
$$\begin{gathered} S_{\dot{\boldsymbol{\mathrm{y}}}}^{T} = \boldsymbol{\mathrm{0}} , \end{gathered}$$
(19d)

which is a general formula for the adjoint system. Equations (19a), (19b) constitute a set of DAEs that must be solved backward in time from the boundary conditions prescribed by Eqs. (19c), (19d). Equation (19d) implies that the terminal cost \(S\) does not depend on the time derivative \(\dot{\boldsymbol{\mathrm{y}}}\), which contradicts the assumption presented in Eq. (14). This issue can be easily circumvented; however, doing so involves introducing additional components that would obscure the main point presented herein. For clarity, the details on how one can treat the boundary conditions of the adjoint system are presented in Appendix A.

3.3 Adjoint in redundant set of variables

Now, let us derive the necessary conditions for the case when a rigid multibody system is described by a set of constrained Hamilton’s equations. Rewriting Eq. (14) in a way that explicitly enforces the dynamic equations (6a), (6b) yields:

$$\begin{aligned} \overline{J}=\int _{0}^{t_{f}}\left [ h + \boldsymbol{\mathrm{\upeta }}^{T}(\boldsymbol{\mathrm{p}}^{*}- \boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{v}}- \boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{\upsigma }}) + \boldsymbol{\mathrm{\upxi }}^{T}(\dot{\boldsymbol{\mathrm{p}}}^{*}- \boldsymbol{\mathrm{Q}}- \dot{\boldsymbol{\mathrm{D}}} \boldsymbol{\mathrm{\upsigma }}) -\boldsymbol{\mathrm{\upmu }}^{T} \dot{\boldsymbol{\mathrm{\Upphi }}} \right ] \, \mathrm{d}t+ {S} \rvert _{t_{f}} . \end{aligned}$$
(20)

Here, the quantities \(\boldsymbol{\mathrm{\upeta }}=\boldsymbol{\mathrm{\upeta }}(t) \in \mathcal{R}^{n_{v}}\), \(\boldsymbol{\mathrm{\upxi }}=\boldsymbol{\mathrm{\upxi }}(t) \in \mathcal{R}^{n_{v}}\), and \(\boldsymbol{\mathrm{\upmu }}=\boldsymbol{\mathrm{\upmu }}(t) \in \mathcal{R}^{m}\) constitute the entries of the adjoint variables vector \(\boldsymbol{\mathrm{w}}\) defined in Sect. 3.2. Splitting the general dynamics formula presented in Eq. (14) into more explicit equations, such as constrained Hamilton’s EOM, will grant us more detailed insight into the properties of the resultant adjoint system; specifically, the relations that arise between absolute and joint-space adjoint variables. Applying the procedure described by Eqs. (14)–(19d) to the extended performance measure (20) yields the following set of equations:

$$\begin{gathered} \boldsymbol{\mathrm{\upeta }}-\dot{\boldsymbol{\mathrm{\upxi }}}= \boldsymbol{\mathrm{0}} , \end{gathered}$$
(21a)
$$\begin{gathered} \boldsymbol{\mathrm{M}}\dot{\boldsymbol{\mathrm{\upeta }}}+ \boldsymbol{\mathrm{D}}\dot{\boldsymbol{\mathrm{\upmu }}}+ \boldsymbol{\mathrm{A}}\dot{\boldsymbol{\mathrm{\upxi }}}+ \boldsymbol{\mathrm{r}}=\boldsymbol{\mathrm{0}} , \end{gathered}$$
(21b)
$$\begin{gathered} \boldsymbol{\mathrm{D}}^{T}\boldsymbol{\mathrm{\upeta }}+ \dot{\boldsymbol{\mathrm{D}}}^{T}\boldsymbol{\mathrm{\upxi }}= \boldsymbol{\mathrm{0}} . \end{gathered}$$
(21c)

The coefficients \(\boldsymbol{\mathrm{A}}(\boldsymbol{\mathrm{q}}, \boldsymbol{\mathrm{v}}, \boldsymbol{\mathrm{\upsigma }})\) and \(\boldsymbol{\mathrm{r}}(\boldsymbol{\mathrm{q}}, \boldsymbol{\mathrm{v}}, \dot{\boldsymbol{\mathrm{v}}}, \boldsymbol{\mathrm{\upsigma }}, \boldsymbol{\mathrm{\upxi }}, \boldsymbol{\mathrm{\upeta }})\) are quantities that arise directly from the calculations presented in Sect. 3.2. The detailed derivation, as well as the formulas for \(\boldsymbol{\mathrm{A}}\) and \(\boldsymbol{\mathrm{r}}\) (cf. Eq. (B.9)), is presented in Appendix B. Equations (21a) and (21b) follow directly from Eq. (19a), while expression (21c) is implied by Eq. (19b). Additionally, Eqs. (19c) or (A.2) provide a set of boundary conditions for the adjoint multipliers \(\boldsymbol{\mathrm{\upeta }}(t_{f})\), \(\boldsymbol{\mathrm{\upxi }}(t_{f})\), \(\boldsymbol{\mathrm{\upmu }}(t_{f})\) at the fixed, terminal time. The formula for these quantities is presented in Appendix A. The system (21a)–(21c) constitutes a set of first-order, index-2, linear DAEs with unknown adjoint variables. One of the objectives of this paper is to reduce the adjoint system of equations to a minimal set by introducing a new set of adjoint variables that closely resemble joint coordinates used in the equations of motion. To this end, a significant computational advantage can be gained by writing Eqs. (21a)–(21c) as a set of ordinary differential equations and reducing their differentiation index. The substitution of \(\dot{\boldsymbol{\mathrm{\upxi }}}\) from Eq. (21a) into a time derivative of Eq. (21c) and into Eq. (21b) yields the following system of equations:

$$ \dot{\boldsymbol{\mathrm{\upxi }}}=\boldsymbol{\mathrm{\upeta }}. $$
(22a)
$$ \begin{aligned} &\left [ \textstyle\begin{array}{c@{\quad}c} \boldsymbol{\mathrm{M}} & \boldsymbol{\mathrm{D}} \\ \boldsymbol{\mathrm{D}}^{T}& \boldsymbol{\mathrm{0}} \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{c} \dot{\boldsymbol{\mathrm{\upeta }}} \\ \dot{\boldsymbol{\mathrm{\upmu }}} \end{array}\displaystyle \right ] = \left [ \textstyle\begin{array}{c} - \boldsymbol{\mathrm{r}} - \boldsymbol{\mathrm{A}} \boldsymbol{\mathrm{\upeta }} \\ -2\dot{\boldsymbol{\mathrm{D}}}^{T}\boldsymbol{\mathrm{\upeta }}- \ddot{\boldsymbol{\mathrm{D}}}^{T}\boldsymbol{\mathrm{\upxi }} \end{array}\displaystyle \right ] = \left [ \textstyle\begin{array}{c} -\boldsymbol{\mathrm{r}}_{A} \\ -\boldsymbol{\mathrm{r}}_{B} \end{array}\displaystyle \right ] \end{aligned} . $$
(22b)

The coefficient matrix on the LHS of equation (22b) is the same as in Eq. (6a). Equations (22a), (22b) must be integrated backward in time from known final conditions, i.e., \(\boldsymbol{\mathrm{\upeta }}(t_{f})\), \(\boldsymbol{\mathrm{\upxi }}(t_{f})\), \(\boldsymbol{\mathrm{\upmu }}(t_{f})\) defined in equations (A.4a), (A.4c). A more detailed discussion on the approach shown herein is presented in reference [30].
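Numerically, integrating "backward in time" simply means handing the integrator a decreasing time span. A minimal SciPy sketch is shown below; the scalar linear test system merely stands in for the adjoint equations, and its right-hand side and terminal condition are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def integrate_adjoint_backward(rhs, w_tf, t_f, t0=0.0):
    # solve_ivp accepts a decreasing time span, so the adjoint system is
    # integrated from the terminal conditions at t_f down to t0
    return solve_ivp(rhs, (t_f, t0), w_tf, rtol=1e-8, atol=1e-10,
                     dense_output=True)

# Stand-in adjoint system: dw/dt = -w with terminal condition w(t_f) = 1,
# whose exact backward solution is w(t) = exp(t_f - t).
sol = integrate_adjoint_backward(lambda t, w: -w, [1.0], 3.0)
```

The dense output `sol.sol(t)` then provides the adjoint trajectory at any time requested by the gradient quadrature.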

4 Independent adjoint variables

The goal of this section is to show the analogies that exist between the constrained Hamiltonian formulation and the necessary conditions for the minimization of a cost functional subjected to differential-algebraic equations. As a result, a novel concept is demonstrated that allows one to introduce a set of independent adjoint variables. The key idea of this derivation lies in the relation between the absolute-coordinate and joint-coordinate formulations of the equations of motion. A closer look at these relationships allows one to define a set of independent adjoint co-states. The unknown variables \(\boldsymbol{\mathrm{\upeta }}\), \(\boldsymbol{\mathrm{\upxi }}\) introduced in equations (21a)–(21c) are dependent since Eqs. (21a)–(21c) constitute the adjoint system to the DAEs (6a), (6b). Therefore, these quantities are referred to as absolute (i.e., constrained) adjoint variables. Conversely, the set of independent adjoint variables \(\hat{\boldsymbol{\mathrm{\upeta }}}\in \mathcal {R}^{n_{\upbeta}}\) and \(\hat{\boldsymbol{\mathrm{\upxi }}}\in \mathcal {R}^{n_{\upbeta}}\), analogous to their dependent counterparts, will be referred to as joint–space adjoint variables.

The joint-space dynamic equations (10a), (10b) are derived by projecting the EOM (6a), (6b) onto the global motion subspace \(\boldsymbol{\mathrm{H}}\). Let us rewrite both sets of equations in the following way:

$$\begin{aligned} \boldsymbol{\mathrm{f}} &\equiv \boldsymbol{\mathrm{p}}^{*}- \boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{v}}- \boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{\upsigma }}= \boldsymbol{\mathrm{0}} , \\ \dot{\boldsymbol{\mathrm{f}}} &\equiv \dot{\boldsymbol{\mathrm{p}}}^{*}- \boldsymbol{\mathrm{Q}}- \dot{\boldsymbol{\mathrm{D}}}\boldsymbol{\mathrm{\upsigma }}= \boldsymbol{\mathrm{0}} . \end{aligned}$$
(23)
$$\begin{aligned} \boldsymbol{\mathrm{\upvarphi }} &\equiv \widehat{\boldsymbol{\mathrm{p}}} - \widehat{\boldsymbol{\mathrm{M}}} \boldsymbol{\mathrm{\upbeta }}= \boldsymbol{\mathrm{0}} , \\ \dot{\boldsymbol{\mathrm{\upvarphi }}} &\equiv \dot{\widehat{\boldsymbol{\mathrm{p}}}} - \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{Q}}- \dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{H}} \boldsymbol{\mathrm{\upbeta }}= \boldsymbol{\mathrm{0}} . \end{aligned}$$
(24)

The relation between the system of equations (23) and the set in (24) can be captured as follows:

$$ \boldsymbol{\mathrm{\upvarphi }} = \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{f}} \quad \Rightarrow \quad \dot{\boldsymbol{\mathrm{\upvarphi }}} = \boldsymbol{\mathrm{H}}^{T} \dot{\boldsymbol{\mathrm{f}}} + \dot{\boldsymbol{\mathrm{H}}}^{T} \boldsymbol{\mathrm{f}} . $$
(25)

Consequently, the performance measure may be transformed in the desired way to reveal a new piece of useful information. Instead of augmenting the cost functional (12) with the absolute-coordinate equations (23), we utilize the joint-space equations (24) and invoke relations (25) to obtain:

$$\begin{aligned} \begin{aligned} {\overline{J}} &= \int _{0}^{t_{f}} \Big[ h + \hat{\boldsymbol{\mathrm{\upeta }}}^{T} \boldsymbol{\mathrm{\upvarphi }} + \hat{\boldsymbol{\mathrm{\upxi }}}^{T} \dot{\boldsymbol{\mathrm{\upvarphi }}} \Big] \, \mathrm{d}t+ {S} \rvert _{t_{f}} \\ &= \int _{0}^{t_{f}} \Big[ h + \underbrace{\big(\hat{\boldsymbol{\mathrm{\upeta }}}^{T} \boldsymbol{\mathrm{H}}^{T} + \hat{\boldsymbol{\mathrm{\upxi }}}^{T} \dot{\boldsymbol{\mathrm{H}}}^{T} \big)}_{ \boldsymbol{\mathrm{\upeta }}^{T}} \boldsymbol{\mathrm{f}} + \underbrace{\big(\hat{\boldsymbol{\mathrm{\upxi }}}^{T} \boldsymbol{\mathrm{H}}^{T}\big)}_{ \boldsymbol{\mathrm{\upxi }}^{T}} \dot{\boldsymbol{\mathrm{f}}} \Big] \, \mathrm{d}t+ {S} \rvert _{t_{f}} . \end{aligned} \end{aligned}$$
(26)

The variables \(\hat{\boldsymbol{\mathrm{\upeta }}}(t) \in \mathcal {R}^{n_{\upbeta}}\) and \(\hat{\boldsymbol{\mathrm{\upxi }}}(t) \in \mathcal {R}^{n_{\upbeta}}\) denote a new set of independent adjoint variables that reside in the joint-space \(\boldsymbol{\mathrm{H}}\). By comparing equation (26) with Eq. (20), we establish the relations between joint- and absolute-coordinate co-states:

$$\begin{aligned} \boldsymbol{\mathrm{\upxi }}& = \boldsymbol{\mathrm{H}} \hat{\boldsymbol{\mathrm{\upxi }}}, \end{aligned}$$
(27a)
$$\begin{aligned} \boldsymbol{\mathrm{\upeta }}& = \boldsymbol{\mathrm{H}} \hat{\boldsymbol{\mathrm{\upeta }}}+ \dot{\boldsymbol{\mathrm{H}}} \hat{\boldsymbol{\mathrm{\upxi }}}. \end{aligned}$$
(27b)

The system of equations (27a), (27b) suggests that \(\hat{\boldsymbol{\mathrm{\upeta }}}= \dot{\hat{\boldsymbol{\mathrm{\upxi }}}}\). In fact, this is the joint-space counterpart of the terse relation \(\boldsymbol{\mathrm{\upeta }}= \dot{\boldsymbol{\mathrm{\upxi }}}\) (cf. Eq. (22a)). The simplicity of these relations is a direct virtue of the Hamiltonian formulation, since both momenta \(\boldsymbol{\mathrm{p}}^{*}\), \(\hat{\boldsymbol{\mathrm{p}}}\) and their derivatives are not premultiplied by any state–dependent coefficients. Moreover, the second equation in (23), (24) is the derivative of the first one, which is not the case, e.g., in the Lagrangian formulation. Ultimately, let us insert Eq. (21a) into Eq. (21c) and note that the outcome is a total time derivative; integrating it yields:

$$ \boldsymbol{\mathrm{D}}^{T} \boldsymbol{\mathrm{\upxi }}= \boldsymbol{\mathrm{0}} . $$
(28)

The expression (28) can be interpreted as the implicit velocity–level constraint equations imposed on the adjoint variable \(\boldsymbol{\mathrm{\upxi }}\). Similarly, equation (27a) represents the same kinematic constraint formulated explicitly. Ultimately, equations (27a) and (28) reveal an interesting property of the adjoint variable \(\boldsymbol{\mathrm{\upxi }}\): this quantity is analogous to the spatial velocity vector \(\boldsymbol{\mathrm{v}}\). The relations are detailed in Table 1.

Table 1 Analogies between spatial velocity vector \(\boldsymbol{\mathrm{v}}\in \mathcal{R}^{n_{v}}\) and the adjoint variable \(\boldsymbol{\mathrm{\upxi }}\in \mathcal{R}^{n_{\upbeta}}\)

Let us now investigate the algebraic constraints imposed on the physical and adjoint systems. Table 2 presents a detailed comparison of these quantities. Specifically, one can see that the adjoint constraints are shifted toward the higher-order derivatives compared to the geometric constraints imposed on a multibody system. Consequently, the index–1 formulation of the adjoint system (22a), (22b) involves jerk-level constraints instead of velocity-level constraints, which is the case for the EOM (6a), (6b). The equations appearing in Table 2 represent an implicit characterization of the constraints. An equivalent form can be obtained with the aid of the motion subspace \(\boldsymbol{\mathrm{H}}\), whose components specify the allowable velocity range space in \(\mathcal{R}^{6}\). For example, the set of equations (27a), (27b) describes explicit velocity- and acceleration-level constraints imposed on the absolute adjoint variables. Subsequently, the differentiation of Eq. (27b) yields the explicit form of the constraints that relate the redundant and independent sets of adjoint variables:

$$ \dot{\boldsymbol{\mathrm{\upeta }}} = \boldsymbol{\mathrm{H}} \dot{\hat{\boldsymbol{\mathrm{\upeta }}}} + 2 \dot{\boldsymbol{\mathrm{H}}}\hat{\boldsymbol{\mathrm{\upeta }}}+ \ddot{{\boldsymbol{\mathrm{H}}}}\hat{\boldsymbol{\mathrm{\upxi }}}. $$
(29)

Substituting Eq. (29) into (22b) and premultiplying both sides by \(\boldsymbol{\mathrm{H}}^{T}\) yields the joint-space adjoint equations:

$$\begin{aligned} \dot{\hat{\boldsymbol{\mathrm{\upxi }}}} &= \hat{\boldsymbol{\mathrm{\upeta }}}, \end{aligned}$$
(30a)
$$\begin{aligned} \Big( \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{H}}\Big) \dot{\hat{\boldsymbol{\mathrm{\upeta }}}} = - \boldsymbol{\mathrm{H}}^{T} \Big( \boldsymbol{\mathrm{r}}_{A} + 2 \boldsymbol{\mathrm{M}} \dot{\boldsymbol{\mathrm{H}}}\hat{\boldsymbol{\mathrm{\upeta }}}+ \boldsymbol{\mathrm{M}} \ddot{{\boldsymbol{\mathrm{H}}}} \hat{\boldsymbol{\mathrm{\upxi }}}\Big) \quad \Rightarrow \quad \widehat{\boldsymbol{\mathrm{M}}} \dot{\hat{\boldsymbol{\mathrm{\upeta }}}} &= -\boldsymbol{\mathrm{r}}_{J} . \end{aligned}$$
(30b)

A system of equations (30a), (30b) may be integrated backward in time for the unknown adjoint multipliers \(\hat{\boldsymbol{\mathrm{\upeta }}}\) and \(\hat{\boldsymbol{\mathrm{\upxi }}}\). The terminal values at \(t=t_{f}\) can be calculated by finding the values of the absolute-coordinate adjoint variables (cf. Eqs. (A.4a), (A.4c)) and projecting them onto the subspace \(\boldsymbol{\mathrm{H}}\):

$$ \big( \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{H}}\big) \hat{\boldsymbol{\mathrm{\upxi }}} \rvert _{t=t_{f}} = { \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{\upxi }}} \rvert _{t=t_{f}} , \quad { \big( \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{H}} \big) \hat{\boldsymbol{\mathrm{\upeta }}}} \rvert _{t=t_{f}} = { \boldsymbol{\mathrm{H}}^{T} \big( \boldsymbol{\mathrm{\upeta }}- \dot{\boldsymbol{\mathrm{H}}}\hat{\boldsymbol{\mathrm{\upxi }}}\big) } \rvert _{t=t_{f}} . $$
(31)

Please note that the matrix \(\boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{H}}\) is invertible in non-singular configurations. Let us also point out that the RHS of Eq. (30b) depends on the absolute adjoint coordinates, i.e., \(\boldsymbol{\mathrm{r}}_{J} = \boldsymbol{\mathrm{r}}_{J}( \boldsymbol{\mathrm{q}}, \boldsymbol{\mathrm{v}}, \dot{\boldsymbol{\mathrm{v}}}, \boldsymbol{\mathrm{\upsigma }}, \boldsymbol{\mathrm{\uplambda }}, \boldsymbol{\mathrm{\upeta }}, \boldsymbol{\mathrm{\upxi }})\); however, the components that appear therein are straightforward and relatively easy to formulate algorithmically [28]. The expressions that depend on the variables \(\boldsymbol{\mathrm{q}}\), \(\boldsymbol{\mathrm{v}}\), \(\dot{\boldsymbol{\mathrm{v}}}\), \(\boldsymbol{\mathrm{\upsigma }}\), \(\boldsymbol{\mathrm{\uplambda }}\) constitute a set of parameters of the quantity \(\boldsymbol{\mathrm{r}}_{J}\) and Eq. (30b), since their values have already been established in the forward sweep. Moreover, a system of equations (30a), (30b) constitutes a set of ODEs, since the constraints are enforced explicitly via Eq. (29) and no additional algebraic constraints must be taken into account (at least for open-loop multibody systems). Once the adjoint variables \(\hat{\boldsymbol{\mathrm{\upeta }}}\), \(\hat{\boldsymbol{\mathrm{\upxi }}}\) are evaluated over the entire time domain, their absolute-coordinate counterparts can be calculated via Eqs. (27a), (27b). Ultimately, they can be utilized to efficiently and reliably compute the gradient of the performance measure (12).
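The projection in Eq. (31) amounts to a pair of solves with the Gram matrix of the motion subspace. The snippet below is an illustration under assumed shapes and names, not the authors' code:

```python
import numpy as np

def project_terminal_conditions(H, Hdot, eta, xi):
    """Recover joint-space terminal adjoints from absolute ones, cf. Eq. (31)."""
    # Gram matrix H^T H, invertible away from singular configurations
    HtH = H.T @ H
    xi_hat = np.linalg.solve(HtH, H.T @ xi)
    eta_hat = np.linalg.solve(HtH, H.T @ (eta - Hdot @ xi_hat))
    return eta_hat, xi_hat
```

When the absolute adjoints are consistent with Eqs. (27a), (27b), this projection is an exact round trip: mapping the recovered joint-space adjoints back through (27a), (27b) reproduces the absolute ones.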

Table 2 Analogies between different algebraic constraints imposed on the multibody system (\(\boldsymbol{\mathrm{\Upphi }}= \boldsymbol{\mathrm{0}}\)) and the adjoint system (\(\boldsymbol{\mathrm{D}}^{T} \boldsymbol{\mathrm{\upxi }}= \boldsymbol{\mathrm{0}}\))

Let us summarize the contents of this section. The forward solution of the equations of motion yields state variables and reaction loads recorded at discrete time instants of the temporal domain. These quantities are needed to compute the coefficients (i.e., the appropriate Jacobian matrices) necessary for the adjoint sensitivity analysis. The results of almost all calculations performed in this process can be reused for the adjoint gradient evaluation (or calculated on-the-fly). Subsequently, by invoking the necessary conditions for the extremum of the performance measure (12), we arrived at a set of DAEs, supplied with the relevant boundary conditions, which must be solved backward in time from the time instant \(t=t_{f}\) to the time instant \(t=0\). Simultaneously, we defined the dependencies between the absolute and joint–space Hamilton’s equations (25). These relationships allowed us to derive the relations between the joint–space and absolute adjoint variables (27a), (27b). Ultimately, by combining expressions (27a), (27b) with the projection methods, it was possible to develop a set of ordinary differential equations for the adjoint system formulated in terms of a set of independent adjoint variables, i.e., equations (30a), (30b). Solving Eqs. (30a), (30b) for the adjoint multipliers allows one to evaluate the gradient of the cost functional (12). A possible approach for the computation of the gradient at each step of the optimization procedure is summarized in Algorithm 1.

Algorithm 1
figure 2

Adjoint-based algorithm for gradient evaluation

5 Numerical examples

This section presents application results that demonstrate the validity of the methodology as well as the underlying challenges encountered. We investigate the problem of open-loop control signal synthesis for a double pendulum on a cart presented in Fig. 2. Gravity acts along the negative \(y\) axis of the global frame, and viscous friction loads are introduced at each joint in the system. The mechanism is controlled by a horizontal force applied to the cart. The vector of design parameters \(\boldsymbol{\mathrm{b}}\) consists of discretized input signals \(u(t)\), where \(\Delta t = 0.005\) s is the discretization step. Consequently, \(\boldsymbol{\mathrm{b}}= [b_{0}, b_{1}, \ldots , b_{N}]\), where \(b_{0}=u(0)\), \(b_{1} = u(\Delta t)\), \(b_{N} = u(N \Delta t) = u(t_{f})\) and \(N = \frac{t_{f}}{\Delta t}\). The input signal \(u(t)\) can be easily recreated from \(\boldsymbol{\mathrm{b}}\). When the integration procedure requests an intermediate value, a spline interpolation is performed to approximate it. The parameters of the system, such as masses, moments of inertia, lengths, etc., are gathered in Table 3. Although the analyzed multibody system moves in a plane, it has been modeled with a spatial approach following the methodology presented in the previous sections.
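The control parametrization described above can be sketched as follows. The sampled signal is a made-up placeholder, and SciPy's `CubicSpline` stands in for the unspecified spline interpolation scheme:

```python
import numpy as np
from scipy.interpolate import CubicSpline

dt, t_f = 0.005, 3.0                     # discretization step and final time
N = int(round(t_f / dt))
t_grid = np.linspace(0.0, t_f, N + 1)    # b_i = u(i * dt), i = 0..N
b = np.sin(2.0 * np.pi * t_grid / t_f)   # placeholder control samples
u = CubicSpline(t_grid, b)               # u(t) for any t requested by the integrator
```

The spline reproduces the stored samples exactly at the grid points and supplies smooth intermediate values whenever the variable-step integrator evaluates the dynamics between them.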

Fig. 2
figure 3

Double pendulum attached to a cart and forces acting on the MBS

Table 3 Model parameters for the OC problem with cart and pendula

5.1 Swing–up maneuver

Initial conditions for the joint angles are specified as \(\alpha _{1}=-\frac{\pi}{2}\), \(\alpha _{2}=0\), which corresponds to the stable equilibrium of the system. The goal of this task is to stabilize the open-loop chain in the upright configuration. Mathematically, this can be described in the following way:

$$ J = \frac{1}{2} \left (\alpha _{1} - \frac{\pi}{2} \right )^{2} + \frac{1}{2} \alpha _{2}^{2} + \frac{\gamma}{2} \Big( \dot{\alpha}^{2}_{1} + \dot{\alpha}^{2}_{2} \Big) \quad \text{for} \enskip t=t_{f} , $$
(32)

where \(\gamma \) is a binary flag: setting \(\gamma =1\) additionally enforces that the pendula stop moving at the final time. The initial guess is simply no actuation (\(u_{0}(t) = 0\)) exerted on the cart. The gradient is calculated by solving Eqs. (30a), (30b) and conveyed to the optimization algorithm. The employed procedure utilizes the \(\mathtt{ode45}\) Matlab function to integrate the forward dynamics and the adjoint system, and \(\mathtt{fminunc}\) to compute the desired control input. Figure 3 presents the optimization convergence rates, specifically, how the cost and the first–order optimality measure progressed with respect to the iteration count in the case when \(\gamma =0\). The first 30 iterations were performed with the steepest descent method, which uniformly reduced the cost to a near-zero value. Subsequently, we employed the quasi-Newton approach with the BFGS method for updating the Hessian to obtain better convergence. In the following step, the optimization was re-run with the last result used as the initial guess for the case \(\gamma =1\) in Eq. (32). The progress of this computation is presented in Fig. 4, while the results of both processes are displayed in Fig. 5. The plotted lines represent the absolute angles (\(\upvarphi _{1}=\upalpha _{1}\), \(\upvarphi _{2}=\upalpha _{1}+\upalpha _{2}\)) of both pendula at the final iteration for the two cases \(\gamma \in \{0,1\}\). Let us point out that the solid lines approach their final values with zero slope, indicating that the final velocity is equal to zero.

Fig. 3
figure 4

Cost function and first-order optimality measure of the swing-up maneuver optimization when \(\gamma =0\)

Fig. 4
figure 5

Cost function and first-order optimality measure of the swing-up maneuver optimization when \(\gamma =1\)

Fig. 5
figure 6

Absolute orientation angles of links

In summary, the overall procedure consists of the following steps:

  1. Set \(\gamma =0\) and use fminunc with the steepest descent method to significantly reduce the value of the performance measure (first 30 iterations in Fig. 3).

  2. While \(\gamma =0\), use fminunc with the BFGS method for updating the Hessian and converge to a more precise solution (iterations 31 and onward in Fig. 3; the result is denoted by dashed lines in Fig. 5).

  3. Set \(\gamma =1\) and use fminunc with the BFGS method with the initial guess taken from the former optimization (Fig. 4). The solution is recorded in Fig. 5 as solid lines.
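This staged procedure can be sketched with SciPy's `minimize` standing in for Matlab's fminunc. The toy quadratic cost below is a placeholder for the forward-plus-adjoint evaluation of \(J\) and its gradient, and SciPy's BFGS replaces the steepest-descent stage for brevity:

```python
import numpy as np
from scipy.optimize import minimize

def cost_and_grad(b, gamma):
    # placeholder cost: in the actual problem, J(b) and its gradient come from
    # the forward dynamics simulation and the backward adjoint sweep
    J = 0.5 * np.dot(b, b) + gamma * 0.5 * np.sum((b - 1.0) ** 2)
    g = b + gamma * (b - 1.0)
    return J, g

b0 = np.zeros(5)   # no actuation as the initial guess
# stage 1: solve the relaxed problem (gamma = 0)
stage1 = minimize(cost_and_grad, b0, args=(0.0,), jac=True, method='BFGS')
# stage 2: warm-start the full problem (gamma = 1) from the stage-1 result
stage2 = minimize(cost_and_grad, stage1.x, args=(1.0,), jac=True, method='BFGS')
```

With `jac=True`, the objective returns the cost and its gradient in one call, mirroring how a single forward/adjoint sweep supplies both quantities to the optimizer.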

The gradient of the objective function computed at the first iteration from the ODE formulation (30a), (30b) has been compared with its DAE counterpart (cf. Eqs. (22a), (22b)). The absolute adjoint variables can be recreated from the joint-space ones using the set of equations (27a), (27b). Figure 6 shows that these quantities have almost identical values. Moreover, we append a plot with the results obtained from the complex-step method [35], which also shows good agreement. Lastly, Fig. 7 displays constraint violation errors at different levels: the velocity-level constraint \(\boldsymbol{\mathrm{D}}^{T}\boldsymbol{\mathrm{\upxi }}\) (28), its first time derivative \(\frac{\mathrm{d}}{\mathrm{d}t}(\boldsymbol{\mathrm{D}}^{T}\boldsymbol{\mathrm{\upxi }})\) (21c), as well as its second time derivative \(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}(\boldsymbol{\mathrm{D}}^{T}\boldsymbol{\mathrm{\upxi }})\). The plots refer to the ODE (cf. Eqs. (30a), (30b)) and DAE (Eqs. (22a), (22b)) formulations, respectively. The latter plot reveals that the constraint equation \(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}(\boldsymbol{\mathrm{D}}^{T}\boldsymbol{\mathrm{\upxi }}) = \boldsymbol{\mathrm{0}}\) is fulfilled up to machine accuracy since it is enforced explicitly in Eq. (22b). On the other hand, the lower-level constraints exhibit larger violation errors when compared against the ODE formulation. The DAE formulation exhibits the drift phenomenon due to a lack of numerical stabilization. Since the adjoint equations are solved backward in time, the most accurate results of the DAE formulation can be found at \(t_{f}=3\) s, whereas the errors accumulate as the numerical procedure approaches the initial time.
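For reference, the complex-step derivative used in the comparison above can be sketched as follows; the cost function here is a simple placeholder, and [35] describes the method itself:

```python
import numpy as np

def complex_step_grad(J, b, h=1e-30):
    # Im(J(b + i*h*e_k)) / h approximates dJ/db_k without subtractive
    # cancellation, so h can be taken extremely small
    g = np.empty_like(b)
    for k in range(b.size):
        bp = b.astype(complex)
        bp[k] += 1j * h
        g[k] = J(bp).imag / h
    return g

J = lambda b: np.sum(np.sin(b) ** 2)    # placeholder cost functional
```

Because no subtraction of nearly equal quantities occurs, the scheme delivers gradients accurate to machine precision, which makes it a convenient reference for verifying adjoint-based gradients.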

Fig. 6
figure 7

The results of different gradient evaluation methods at the initial step (\(u(t) = 0\))

Fig. 7
figure 8

Constraint violation errors for adjoint–based conditions at \(\boldsymbol{\mathrm{u}}_{0}\). Comparison between joint space (30a), (30b) and absolute (22a), (22b) formulations

5.2 Stabilization of the cart

This test case investigates the calculation of the input control signal that will stabilize the cart while the pendula are free-falling under the gravitational force. The initial configuration of the MBS can be described in the following way: \(\alpha _{0} = 0\), \(\alpha _{1}=\frac{\pi}{4}\), \(\alpha _{2}=0\), while the bodies’ velocities are equal to zero. The optimal control approach to solving this problem requires specifying a performance measure, which can be defined in the following way:

$$ J = \frac{1}{2} \int _{0}^{t_{f}} x^{2} \, \mathrm{d}t+ {\frac{1}{2} x^{2}} \rvert _{t_{f}}, $$
(33)

where \(x := \alpha _{0}\) denotes the \(x\)–coordinate of the cart. On the other hand, it is possible to abandon the OC approach by adding a driving constraint equation of the form \(x=0\), solving only the forward dynamics problem, and recording the Lagrange multiplier \(\uplambda _{x=0}\) associated with the appropriate reaction force. Let us note that the latter approach can be applied only to a narrow class of problems; e.g., it would not work in the case of Sect. 5.1. Nevertheless, this approach yields a very good approximation of the solution and can be utilized to compare and verify the outcome. Figure 8 depicts the input control forces calculated using the approach proposed in this paper and by taking into account the driving constraint force. Correspondingly, Fig. 9 presents how the position of the cart changes in time for the different actuation signals. The response of the cart for the optimized input signal deviates by at most \(\pm 7\) mm from the equilibrium, which is a very good fit.

Fig. 8
figure 9

Control signal for different test cases

Fig. 9
figure 10

Motion of the cart for different control signals

The starting guess for the optimization is again set to zero, i.e., \(u_{0}(t)=0\). Figure 10 shows the convergence rate of the \(\mathtt{fminunc}\) Matlab function, which executes a quasi-Newton algorithm with a user-supplied gradient calculated from equations (30a), (30b). Figure 11 displays the gradients computed at the initial guess of the optimization and evaluated at the solution found by utilizing the driving constraint. One can see close agreement between the methods employed. Moreover, the value of the gradient evaluated at the solution is expected to be equal to zero. This is not precisely the case in Fig. 11; however, the magnitude of the computed gradient is significantly lower when compared with the gradient evaluated at the initial guess.

Fig. 10
figure 11

Cost function and first-order optimality measure for stabilization of the cart in rest

Fig. 11
figure 12

The results of different gradient evaluation methods at the first step and for the exact solution

6 Discussion

The scope of this paper is relatively broad, touching on different formulations of the EOM, projection methods, and adjoint sensitivity analysis. Its main contribution lies in the novel formulation of the adjoint method based on independent adjoint variables, which avoids computing the complex state derivatives that arise when the standard formulation of the adjoint method is combined with the joint–space equations of motion.

In this paper, the absolute- (6a), (6b) and relative-coordinate (10a), (10b) formulations of Hamilton’s equations of motion are used interchangeably. By deriving the joint–space approach directly from its absolute–coordinate counterpart, the underlying dependencies between both formulations are succinctly captured by Eq. (25). These observations are further utilized to identify a set of independent adjoint variables, shown in Eq. (26). The interrelations between the joint–space and redundant sets of co-states are made explicit in Eqs. (27a), (27b). Specifically, equation (27a) reveals a rather surprising analogy between the joint velocity \(\boldsymbol{\mathrm{\upbeta }}\) and the adjoint variable \(\hat{\boldsymbol{\mathrm{\upxi }}}\). Ultimately, it has been possible to formulate the adjoint system as a set of ODEs represented by a minimal set of variables by combining the derived relations with the projection methods and applying them to the absolute-coordinate adjoint system (22a), (22b). The resultant adjoint system (30a), (30b) can be solved backward in time with the aid of a standard ODE integration routine to efficiently evaluate the gradient of the performance measure.

Formulation (30a), (30b) possesses many advantages over the system (22a), (22b), which are directly inherited from the joint-space formulation (10a), (10b). The most obvious is that ODEs are generally easier to solve than DAEs and give more stable solutions during the numerical integration (cf. Fig. 7). This virtue is valid in the case of open-loop kinematic chains. The analysis of closed-loop systems requires additional efforts to include loop-closure constraints. Nevertheless, the number of algebraic constraints in joint-space framework would be significantly lower compared to the redundant setting. The scope of this paper is limited to tree-like topologies. Consequently, the number of variables that must be integrated numerically is proportional to the number of degrees of freedom, rather than the total number of redundant coordinates, which, in many practical applications, may provide a significant improvement over the absolute–coordinate formulation.

Furthermore, the coefficients and the right-hand side (\(\boldsymbol{\mathrm{r}}_{A}\), \(\boldsymbol{\mathrm{r}}_{B}\), \(\boldsymbol{\mathrm{r}}_{J}\)) of the adjoint system involve state-dependent Jacobians of the EOM with respect to the state variables. These matrices have a regular and relatively simple structure in the case of Eq. (22b), where a vast majority of expressions are readily available for implementation in closed form [28, 29]. Additional details can also be found in ref. [30] (specifically in equations (17), (27), and (31)). Equation (30b) benefits from this property since its RHS is a projection of the absolute-coordinate adjoint system (22a), (22b) onto the motion subspace \(\boldsymbol{\mathrm{H}}\). Figure 12 presents various pathways one can take to generate the adjoint system. For instance, path \(C\) is explored in ref. [30], whereas path \(C\)–\(D\) is proposed in this paper. Conversely, one can derive the adjoint system by applying variational principles to the original joint-space EOM (10a), (10b) (path \(A\)–\(B\) in Fig. 12). It can be demonstrated that such an approach produces the same set of equations as shown in Eqs. (30a), (30b), albeit with a different right-hand side, i.e., \(\boldsymbol{\mathrm{r}}_{J} = \boldsymbol{\mathrm{r}}_{J}( \boldsymbol{\mathrm{\upalpha }}, \dot{\boldsymbol{\mathrm{\upalpha }}}, \ddot{\boldsymbol{\mathrm{\upalpha }}})\) [36]. Although that formulation does not involve additional Lagrange multipliers (e.g., \(\boldsymbol{\mathrm{\upsigma }}\), \(\boldsymbol{\mathrm{\uplambda }}\)), the partial derivatives required to evaluate the RHS become significantly more complex and cumbersome to compute when compared with the RHS of Eq. (30b).
For example, the derivative \((\hat{\boldsymbol{\mathrm{M}}} \boldsymbol{\mathrm{\upbeta }})_{ \boldsymbol{\mathrm{\upalpha }}}\) (where \(\widehat{\boldsymbol{\mathrm{M}}} = \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{H}}\)) is much more involved than its absolute-coordinate counterpart \((\boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{v}})_{\boldsymbol{\mathrm{q}}}\), especially when one aims at implementing a systematic procedure.
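The projection \(\widehat{\boldsymbol{\mathrm{M}}} = \boldsymbol{\mathrm{H}}^{T} \boldsymbol{\mathrm{M}} \boldsymbol{\mathrm{H}}\) itself is cheap and structure-preserving, which is a minimal numerical sketch can make concrete. The sizes and the random stand-in matrices below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hedged sketch of the projection underlying Eq. (30b): the joint-space
# inertia matrix M_hat = H^T M H is obtained by projecting the (block-
# diagonal, easily differentiated) absolute-coordinate mass matrix M onto
# the joint motion subspace H. Shapes are illustrative only.
n_q, dof = 6, 2                            # redundant vs. joint coordinates
rng = np.random.default_rng(0)

M = np.diag(rng.uniform(1.0, 5.0, n_q))    # stand-in absolute-coordinate inertia
H = rng.standard_normal((n_q, dof))        # stand-in motion subspace (full rank)

M_hat = H.T @ M @ H                        # joint-space inertia, (dof x dof)

# M_hat inherits symmetry and positive definiteness from M,
# so it can be reused as the leading matrix of the adjoint system.
assert np.allclose(M_hat, M_hat.T)
assert np.all(np.linalg.eigvalsh(M_hat) > 0)
print(M_hat.shape)  # (2, 2)
```

The difficulty discussed above lies not in forming \(\widehat{\boldsymbol{\mathrm{M}}}\) but in differentiating \(\widehat{\boldsymbol{\mathrm{M}}} \boldsymbol{\mathrm{\upbeta }}\) with respect to \(\boldsymbol{\mathrm{\upalpha }}\), since \(\boldsymbol{\mathrm{H}}\) is itself configuration-dependent.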

Fig. 12

Possible pathways to generate the adjoint system (30a), (30b). The scope of this paper covers path \(C\)–\(D\) (Sects. 3.2 and 4), while ref. [30] investigates path \(C\). Additionally, Sect. 2.2 presents path \(A\)

The proposed approach is amenable to extensions that handle closed kinematic chains. The idea follows the approach presented in ref. [33], where a subset of independent joint variables \(\boldsymbol{\mathrm{\upbeta }}^{*} \in \mathcal {R}^{\mathit{dof}}\) is selected from the dependent set \(\boldsymbol{\mathrm{\upbeta }}\). The relation between \(\boldsymbol{\mathrm{\upbeta }}\) and \(\boldsymbol{\mathrm{\upbeta }}^{*}\) is analogous to that expressed in Eq. (7), i.e. \(\boldsymbol{\mathrm{\upbeta }}= \boldsymbol{\mathrm{E}} \boldsymbol{\mathrm{\upbeta }}^{*}\), which follows from the velocity-level loop-closure equation \(\boldsymbol{\mathrm{\Upgamma }}\boldsymbol{\mathrm{\upbeta }}= \boldsymbol{\mathrm{0}}\). The term \(\boldsymbol{\mathrm{E}} \in \mathcal {R}^{n_{\beta} \times \mathit{dof}}\) is the counterpart of the subspace \(\boldsymbol{\mathrm{H}}\), spanning the null space of \(\boldsymbol{\mathrm{\Upgamma }}\) parametrized by \(\boldsymbol{\mathrm{\upbeta }}^{*}\). Therefore, by projecting equations (11a), (11b) onto the subspace \(\boldsymbol{\mathrm{E}}\), one arrives at a system equivalent to that described by Eqs. (10a), (10b). Subsequently, relation (25), as well as all the following derivations, remains valid under the substitution \(\boldsymbol{\mathrm{H}}\leftarrow \boldsymbol{\mathrm{H}}^{*} = \boldsymbol{\mathrm{H}}\boldsymbol{\mathrm{E}}\). Specifically, the minimal–coordinate adjoint variables \(\hat{\boldsymbol{\mathrm{\upeta }}}^{*}, \hat{\boldsymbol{\mathrm{\upxi }}}^{*} \in \mathcal {R}^{\mathit{dof}}\) would map onto their absolute–coordinate counterparts \(\boldsymbol{\mathrm{\upeta }}\), \(\boldsymbol{\mathrm{\upxi }}\) via (27a), (27b) with \(\boldsymbol{\mathrm{H}}^{*}\) and \(\dot{\boldsymbol{\mathrm{H}}}^{*}\) substituted for \(\boldsymbol{\mathrm{H}}\), \(\dot{\boldsymbol{\mathrm{H}}}\). The resulting adjoint system would take a form similar to that shown in Eqs. (30a), (30b). These extensions are the subject of the authors' ongoing research.
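The subspace composition described above can be sketched numerically. This is an illustrative construction under stated assumptions, not the paper's (or ref. [33]'s) implementation: all matrices are random stand-ins with the stated shapes, and the null-space basis \(\boldsymbol{\mathrm{E}}\) is built here via an SVD, whereas ref. [33] may construct it differently (e.g., by coordinate partitioning).

```python
import numpy as np

# Closed-loop extension sketch: a subset beta* of independent joint rates
# spans the dependent set via beta = E beta*, and the composed subspace
# H* = H E replaces H in the projections (27a), (27b).
n_q, n_beta, dof = 9, 4, 2
rng = np.random.default_rng(1)

H = rng.standard_normal((n_q, n_beta))                # joint motion subspace
Gamma = rng.standard_normal((n_beta - dof, n_beta))   # loop-closure Jacobian

# E spans the null space of Gamma: Gamma @ (E @ beta_star) = 0 for any beta*.
_, _, Vt = np.linalg.svd(Gamma)
E = Vt[-dof:].T                                       # (n_beta x dof)

H_star = H @ E                                        # composed subspace, (n_q x dof)

beta_star = rng.standard_normal(dof)
beta = E @ beta_star
# Loop closure is satisfied at the velocity level by construction:
assert np.allclose(Gamma @ beta, 0.0, atol=1e-10)
print(H_star.shape)  # (9, 2)
```

Only the \(\mathit{dof}\)-dimensional quantities \(\boldsymbol{\mathrm{\upbeta }}^{*}\), \(\hat{\boldsymbol{\mathrm{\upeta }}}^{*}\), \(\hat{\boldsymbol{\mathrm{\upxi }}}^{*}\) would then need to be integrated, mirroring the open-chain case.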

As a final comment, the relations demonstrated in (27a), (27b), combined with the analogies between state and adjoint variables gathered in Table 1, may pave the way to a fully recursive formulation for solving the adjoint system. Since the leading matrix is the same in the forward (Eq. (6a)) and adjoint (Eq. (22b)) problems, it is expected that the Hamiltonian-based divide-and-conquer parallel algorithm [9] can be applied here to make the design sensitivity analysis efficient, especially for multi-degree-of-freedom systems.

7 Summary and conclusions

A novel adjoint-based method that exploits a set of independent co-state variables has been developed, verified, and compared to existing state-of-the-art formulations (the redundant-coordinate adjoint method and the complex-step method). The joint-coordinate adjoint method is derived as an extension of the efforts presented in [30]. The text introduces parallels between the formulation of Hamilton's equations of motion in a mixed redundant–joint set of coordinates and the necessary conditions arising from the minimization of the cost functional. The proposed unified treatment allows one to reformulate the adjoint DAEs as a set of first-order ordinary differential equations by reusing the joint-space inertia matrix calculated in the forward dynamics problem. The approach is relatively simple to implement, since the core partial derivatives are calculated with respect to the redundant set of coordinates and projected back onto the appropriate joint motion and constraint–force subspaces. The validity and properties of the approach are demonstrated on the optimal control of a double pendulum on a cart. Through numerical studies, the performance of the proposed approach has been quantified, especially in terms of the constraint violation errors arising from backward integration of the adjoint system. The results show that the joint–coordinate adjoint method is stable and achieves higher accuracy than its redundant-coordinate counterpart. Although the method has been tested in only two scenarios, it is applicable to the optimal control of multibody systems larger than those presented in the text and over longer time horizons.