3.1 Introduction

Increasingly complex space missions require launch vehicles with greater load-carrying capacity, better orbit-injection accuracy and higher reliability. These demands also increase the complexity of the vehicle, leading to a higher probability of faults, especially in the propulsion system. To remedy this issue, an advanced, robust, fault-tolerant ascent guidance scheme is critical to mission success. The iterative guidance method (IGM) [1] and powered explicit guidance (PEG) [2] are two commonly used methods for the ascent phase of launch vehicles. These two guidance methods work well under nominal conditions and can adapt to many off-nominal conditions [3]. However, they lack strong adaptive capability and cannot guarantee reliability when the dynamic model or its parameters change significantly. Alternatively, numerical approaches based on optimal control theory may be a better choice. The existing algorithms can be divided into direct methods and indirect methods. Using an indirect method, the guidance problem is transformed into a Hamiltonian two-point boundary-value problem (TPBVP) [4], but solving a TPBVP is complicated and highly sensitive to the initial guess. Using a direct method, the guidance problem is transformed into a nonlinear programming (NLP) problem [5]. However, solving such a problem is extremely computationally intensive, which makes it difficult to meet the real-time requirements of online application.

In recent years, computational guidance has been proposed, in which guidance commands are generated onboard through rapid trajectory optimization or other numerical computation [6]. Convex optimization is the most popular method in this field [7]. Its advantage is that a convex problem can be reliably solved by interior-point methods to global optimality in polynomial time, regardless of the initial guess. Owing to this high efficiency, convex optimization methods have been applied to various guidance designs, such as entry guidance [8], landing guidance [9], as well as ascent guidance [10]. A Newton-Kantorovich pseudospectral convex programming method was presented in [10] to solve the ascent trajectory planning problem; the combination of Newton-Kantorovich iteration and pseudospectral discretization further improves the computational efficiency. Similarly, Li et al. [11] presented a pseudospectral convex optimization approach to solve the ascent guidance problem in the presence of a thrust-drop fault. Song et al. [12] investigated online joint optimization of the target orbit and flight trajectory for launch vehicles under a propulsion-system fault. Most recently, the optimal abort-orbit problem was studied in [13], which employed a second-order cone programming (SOCP) approach to find the circular abort orbit of maximum radius. As this literature shows, computational guidance and online programming methods for ascent problems have been studied extensively. However, most of them are still limited in terms of computational efficiency.

Another promising approach for online application is model predictive static programming (MPSP) [14]. Because it features an explicit closed-form solution and avoids the numerical complexities of optimal control theory, this method exhibits higher computational efficiency than convex optimization methods. In our previous work, a generalized quasi-spectral MPSP (GS-MPSP) was proposed [15], which further improves the computational speed by using spectral discretization and a collocation method. The algorithm also offers the advantages of smooth control and better usability, and hence holds great promise for online application.

In this chapter, based on the GS-MPSP philosophy, an ascent guidance scheme for the thrust-drop fault of a launch vehicle is presented. In this guidance problem, a thrust-drop fault may change the flight time substantially compared to the nominal trajectory, and a proper final time or orbit-injection point is not easy to specify in advance. The guidance algorithm must therefore be able to search for the final time over a large range. The original GS-MPSP [15] addresses the free-terminal-time problem by formulating a sensitivity relation between the final time and the final outputs, and taking the final time as an additional variable to be determined. However, this treatment is quite rough and is feasible only for slightly adjusting the terminal time when a good initial guess is provided. Therefore, an improved GS-MPSP (IGS-MPSP) method is first proposed, which enhances the convergence robustness for the free final time in the presence of a poor initial guess by introducing a scale factor of the time interval. The ascent guidance is then systematically formulated using this improved algorithm. Finally, a numerical simulation for the case of injection into a circular orbit is conducted to verify the effectiveness of the proposed method, and a comparison with an SOCP-based method is also carried out.

3.2 Generic Theory of the IGS-MPSP Method

The GS-MPSP method was proposed to efficiently solve a class of nonlinear problems with terminal constraints. The nonlinear system dynamics under consideration are as follows:

$$\begin{aligned} \dot{\boldsymbol{X}}(t) = \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t),t \in [t_0,t_f] \end{aligned}$$
(3.1)
$$\begin{aligned} \boldsymbol{Y}(t) = \boldsymbol{h}(\boldsymbol{X},t) \end{aligned}$$
(3.2)

where \(\boldsymbol{X} \in {\boldsymbol{R}^n}\), \(\boldsymbol{U} \in {\boldsymbol{R}^m}\) and \(\boldsymbol{Y} \in {\boldsymbol{R}^p}\) denote the state, control and output vectors, respectively. The purpose of this approach is to find a suitable control history \(\boldsymbol{U}(t)\) that drives the final system output \(\boldsymbol{Y}({t_f})\) to the desired value \({\boldsymbol{Y}_d}\) with minimum control effort.

3.2.1 The Sensitivity Relation for the Free-Terminal-Time Continuous System

In this subsection, a sensitivity relation for the continuous system with free terminal time is derived, building on the sensitivity relations developed in Refs. [15, 16]. A brief derivation is presented in the following.

In the proposed method, the terminal time is adjusted by uniformly scaling the length of the time interval \([t_0,t_f]\). Thus, the updated terminal time can be written as:

$$\begin{aligned} t_f^{l+1} = t_f^l+\Delta \kappa \cdot (t_f^l-t_0) \end{aligned}$$
(3.3)

where \(\Delta \kappa \in R\) is the scale factor, and the superscripts l and \(l+1\) denote the current step and the updated step, respectively. The differential of Eq. (3.3) is given by

$$\begin{aligned} dt^{l+1} = dt^l+\Delta \kappa \cdot dt^l,t\in [t_0,t_f^l] \end{aligned}$$
(3.4)

Note that in Eq. (3.4), the term \(\Delta \kappa \cdot dt^l\) denotes the change in length of each infinitesimal time interval \(dt,t\in [t_0,t_f^l]\). Next, the final output vector of the system (3.1) can be expressed as follows by introducing a weighting matrix \(\boldsymbol{W}(t)\in \boldsymbol{R}^{p\times n}\):

$$\begin{aligned} \boldsymbol{Y} (\boldsymbol{X}(t_f)) = \boldsymbol{Y} (\boldsymbol{X}(t_f)) + \int \limits _{{t_0}}^{{t_f}} {\left[ {\boldsymbol{W}(t) \cdot (\boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)-\boldsymbol{\dot{X}}(t))} \right] } \ dt \end{aligned}$$
(3.5)

Differentiating both sides of Eq. (3.5) gives

$$\begin{aligned} \begin{aligned} d\boldsymbol{Y}(\boldsymbol{X}({t_f})) = \frac{{\partial \boldsymbol{Y}(\boldsymbol{X}({t_f}))}}{{\partial \boldsymbol{X}({t_f})}} \cdot d\boldsymbol{X}({t_f}) + \int \limits _{{t_0}}^{{t_f}} \left[ \boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{X}(t)}} \cdot \delta \boldsymbol{X}(t)\right. \\ \left. + \boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{U}(t)}} \cdot \delta \boldsymbol{U}(t) - \boldsymbol{W}(t) \cdot d \boldsymbol{\dot{X}}(t) \right] dt \end{aligned} \end{aligned}$$
(3.6)

Note that in Eq. (3.6), \(d\boldsymbol{X}(t)\) denotes the differential of \(\boldsymbol{X}(t)\) taking into account the differential change of the time interval, \(\Delta \kappa \cdot dt\), while \(\delta \boldsymbol{X}\) denotes the variation in \(\boldsymbol{X}\) when the time interval is held fixed. They are related as follows:

$$\begin{aligned} d\boldsymbol{X}(t) = \delta \boldsymbol{X}(t) + \boldsymbol{\dot{X}}(t) \cdot \Delta \kappa \cdot dt \end{aligned}$$
(3.7)

Differentiating Eq. (3.7) with respect to time yields

$$\begin{aligned} d\boldsymbol{\dot{X}}(t) = \delta \boldsymbol{\dot{X}}(t) + \boldsymbol{\dot{X}}(t) \cdot \Delta \kappa \end{aligned}$$
(3.8)

Substituting \(d\boldsymbol{\dot{X}}(t)\) from Eq. (3.8) into the integrand on the right-hand side of Eq. (3.6) leads to

$$\begin{aligned} \begin{aligned} d\boldsymbol{Y}(\boldsymbol{X}({t_f})) = \frac{{\partial \boldsymbol{Y}(\boldsymbol{X}({t_f}))}}{{\partial \boldsymbol{X}({t_f})}} \cdot d\boldsymbol{X}({t_f}) + \int \limits _{{t_0}}^{{t_f}} \left[ \boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{X}(t)}} \cdot \delta \boldsymbol{X}(t)\right. \\ \left. + \boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{U}(t)}} \cdot \delta \boldsymbol{U}(t) - \boldsymbol{W}(t) \cdot \boldsymbol{\dot{X}}(t) \cdot \Delta \kappa - \boldsymbol{W}(t) \cdot \delta \boldsymbol{\dot{X}}(t) \right] \ dt \end{aligned} \end{aligned}$$
(3.9)

Integrating by parts the last term on the right-hand side of Eq. (3.9), we obtain

$$\begin{aligned} \begin{aligned} d\boldsymbol{Y}(\boldsymbol{X}({t_f}))&= \frac{{\partial \boldsymbol{Y}(\boldsymbol{X}({t_f}))}}{{\partial \boldsymbol{X}({t_f})}} \cdot \delta \boldsymbol{X}({t_f}) - {\left[ {\boldsymbol{W}(t) \cdot \delta \boldsymbol{X}(t)} \right] _{t = {t_f}}} + {\left[ {\boldsymbol{W}(t) \cdot \delta \boldsymbol{X}(t)} \right] _{t = {t_0}}} \\&+ \int \limits _{{t_0}}^{{t_f}} \left[ {\boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{X}(t)}} \cdot \delta \boldsymbol{X}(t) + \boldsymbol{\dot{W}}(t) \cdot \delta \boldsymbol{X}(t)} \right. \\&\left. {+ \boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{U}(t)}} \cdot \delta \boldsymbol{U}(t) - \boldsymbol{W}(t) \cdot \boldsymbol{\dot{X}}(t) \cdot \Delta \kappa } \right] \ dt \\ \end{aligned} \end{aligned}$$
(3.10)

Equation (3.10) can be further rearranged as

$$\begin{aligned} \begin{aligned} d\boldsymbol{Y}(\boldsymbol{X}({t_f})) = \left( {\frac{{\partial \boldsymbol{Y}(\boldsymbol{X}({t_f}))}}{{\partial \boldsymbol{X}({t_f})}} - {{\left[ {\boldsymbol{W}(t)} \right] }_{t = {t_f}}}} \right) \cdot \delta \boldsymbol{X}({t_f}) + {\left[ {\boldsymbol{W}(t) \cdot \delta \boldsymbol{X}(t)} \right] _{t = {t_0}}}\\ + \int \limits _{{t_0}}^{{t_f}} \left[ \left( {\boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{X}(t)}}\boldsymbol{ + \dot{W}}(t)} \right) \cdot \delta \boldsymbol{X}(t) + \boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{U}(t)}} \cdot \delta \boldsymbol{U}(t) \right. \\ \left. - \boldsymbol{W}(t) \cdot \boldsymbol{\dot{X}}(t) \cdot \Delta \kappa \right] \ dt\\ \end{aligned} \end{aligned}$$
(3.11)

Here, \(\boldsymbol{W}(t)\) is chosen so as to eliminate the coefficients of \(\delta \boldsymbol{X}(t)\) and \(\delta \boldsymbol{X}(t_f)\) in Eq. (3.11), which leads to the following weighting-matrix dynamics with the associated boundary condition at the final time \(t_f\):

$$\begin{aligned}&\boldsymbol{\dot{W}}(t) = - \boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{X}(t)}}\end{aligned}$$
(3.12)
$$\begin{aligned}&\qquad \boldsymbol{W}({t_f}) = \frac{{\partial \boldsymbol{Y}(\boldsymbol{X}({t_f}))}}{{\partial \boldsymbol{X}({t_f})}} \end{aligned}$$
(3.13)

In Eq. (3.11), \(\delta \boldsymbol{X}({t_0}) = 0\) follows directly from the fact that the initial conditions are specified. Using this observation and the weighting-matrix dynamics presented in Eqs. (3.12) and (3.13), Eq. (3.11) can be further simplified to

$$\begin{aligned} \begin{aligned} d\boldsymbol{Y}(\boldsymbol{X}({t_f}))&= \int \limits _{{t_0}}^{{t_f}} {\left[ {\boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{U}(t)}} \cdot \delta \boldsymbol{U}(t)} \right] } \ dt + \int \limits _{{t_0}}^{{t_f}} {\left[ { - \boldsymbol{W}(t) \cdot \boldsymbol{\dot{X}}(t) \cdot \Delta \kappa } \right] } \ dt \\&= \int \limits _{{t_0}}^{{t_f}} {\left[ {{\boldsymbol{B}_s}(t) \cdot \delta \boldsymbol{U}(t)} \right] } \ dt + {\boldsymbol{B}_\kappa } \cdot \Delta \kappa \end{aligned} \end{aligned}$$
(3.14)

where

$$\begin{aligned} {\boldsymbol{B}_s}(t) \triangleq \boldsymbol{W}(t) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{U}(t)}} \end{aligned}$$
(3.15)
$$\begin{aligned} {\boldsymbol{B}_\kappa } \triangleq - \int \limits _{{t_0}}^{{t_f}} {\left[ {\boldsymbol{W}(t) \cdot \boldsymbol{\dot{X}}(t)} \right] }dt \end{aligned}$$
(3.16)

where \(\boldsymbol{B}_s(t)\) is the sensitivity matrix that relates the control variation \(\delta \boldsymbol{U}\) to \(d\boldsymbol{Y}\), as per Ref. [16], and \(\boldsymbol{B}_\kappa \) can be interpreted as the sensitivity matrix that relates the scale factor of the time interval, \(\Delta \kappa \), to the final output error \(d\boldsymbol{Y}\). Note that here the sensitivity relation for the terminal time is formulated indirectly through the scale factor of the time interval, \(\Delta \kappa \). Compared with the original treatment in GS-MPSP, this strategy is more accurate, since it uniformly scales each infinitesimal time interval on \([t_0,t_f]\) rather than roughly adjusting the final time alone.
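To make the relation (3.14) concrete, it can be checked numerically on a scalar linear system, for which Eqs. (3.12) and (3.13) give the closed form \(W(t) = e^{a(t_f - t)}\). The sketch below is illustrative only: the system, its coefficients and the perturbation sizes are assumptions, and, consistent with the conventions actually used later in Eqs. (3.21) and (3.22) (where \(d\boldsymbol{C}_j = \boldsymbol{C}_j^l - \boldsymbol{C}_j^{l+1}\) is a decrease), \(d\boldsymbol{Y}\) is read as the decrease of the final output caused by a control decrease together with the interval scaling.

```python
import math

# Scalar test system: xdot = a*x + b*u, output Y = x(tf), so that
# W(t) = exp(a*(tf - t)) solves Eqs. (3.12)-(3.13). All numerical values
# below are illustrative assumptions, not values from the text.
a, b = -0.5, 1.0
t0, tf, x0 = 0.0, 2.0, 1.0
c = 0.8                      # nominal constant control
n = 2000                     # integration / quadrature resolution

def rk4_path(cc, tff):
    """States of xdot = a*x + b*cc on [t0, tff] at n+1 uniform nodes (RK4)."""
    h, x, xs = (tff - t0) / n, x0, [x0]
    f = lambda x: a * x + b * cc
    for _ in range(n):
        k1 = f(x); k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2); k4 = f(x + h * k3)
        x += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(x)
    return xs

h = (tf - t0) / n
ts = [t0 + i * h for i in range(n + 1)]
xs = rk4_path(c, tf)
W = [math.exp(a * (tf - t)) for t in ts]                 # weighting "matrix"
trapz = lambda g: h * (sum(g) - 0.5 * (g[0] + g[-1]))   # quadrature helper
int_Bs = trapz([Wi * b for Wi in W])                    # int of Bs, Eq. (3.15)
Bk = -trapz([Wi * (a * x + b * c) for Wi, x in zip(W, xs)])  # Eq. (3.16)

# Small control DECREASE dU and interval scaling dk, applied simultaneously
dU, dk = 1e-3, 1e-3
predicted = int_Bs * dU + Bk * dk                       # Eq. (3.14)
actual = rk4_path(c, tf)[-1] - rk4_path(c - dU, tf + dk * (tf - t0))[-1]
assert abs(predicted - actual) < 5e-6
```

With these values the predicted and directly simulated output changes agree to first order (the residual is of second order in the perturbations), which is exactly the linearization the IGS-MPSP update relies on.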

3.2.2 The Mathematical Formulation of the IGS-MPSP Method

In IGS-MPSP, the control vector is represented by a weighted sum of basis spectral functions to reduce the number of variables to be optimized:

$$\begin{aligned} \boldsymbol{U}(t) = \sum \limits _{i = 1}^{{N_p}} {{\boldsymbol{C}_i}} {P_i}(t) \end{aligned}$$
(3.17)

where \({\boldsymbol{C}_i}={[c_{1i},c_{2i},\ldots ,c_{mi}]}^T\) is the coefficient vector corresponding to the ith spectral basis function \(P_i(t)\), and \(N_p\) is the number of basis functions in the expression. The spectral functions can be selected as Legendre series, Chebyshev series, etc.
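As a minimal sketch of the parameterization in Eq. (3.17), the snippet below builds a scalar control (\(m = 1\)) from the first three Legendre polynomials evaluated on the scaled time \(\tau \in [-1,1]\); the coefficient values and the time span are illustrative assumptions.

```python
# Spectral control of Eq. (3.17): U(t) = sum_i C_i * P_i(tau(t)), using the
# first three Legendre polynomials. Coefficients are assumed for illustration.
t0, tf = 0.0, 100.0
P = [lambda s: 1.0,
     lambda s: s,
     lambda s: 0.5 * (3.0 * s * s - 1.0)]
C = [1.0, 0.5, -0.25]        # one (scalar) coefficient per basis function

def U(t):
    """Evaluate the spectral control at physical time t."""
    s = 2.0 * (t - t0) / (tf - t0) - 1.0   # map physical time to [-1, 1]
    return sum(ci * pi(s) for ci, pi in zip(C, P))

# Only N_p = 3 numbers describe the whole smooth profile, instead of one
# control value per discretization node.
assert abs(U(t0) - 0.25) < 1e-12     # P = (1, -1, 1) at tau = -1
assert abs(U(tf) - 1.25) < 1e-12     # P = (1,  1, 1) at tau = +1
```

This is the sense in which the spectral representation shrinks the decision space: the optimizer updates a handful of coefficients rather than a dense control sequence.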

Next, the newly developed relation in Eq. (3.14) is applied to derive the IGS-MPSP method for the free-time problem. Since the control history \(\boldsymbol{U}(t),t \in [{t_0},{t_f}]\) has been represented by the spectral functions in Eq. (3.17), and the scale factor of the time interval, \(\Delta \kappa \), has been introduced to adjust the unspecified final time as in Eq. (3.3), both the coefficient vectors \([\boldsymbol{C}_1,\boldsymbol{C}_2,\ldots ,\boldsymbol{C}_{N_p}]\) and the scale factor \(\Delta \kappa \) are selected as the variables to be determined.

First, the variation of the control history over \(t \in [{t_0},{t_f}]\) can be obtained from Eq. (3.17):

$$\begin{aligned} d\boldsymbol{U}(t) = \sum \limits _{j = 1}^{{N_p}} {d{\boldsymbol{C}_j}{P_j}(t)} \end{aligned}$$
(3.18)

Substituting Eq. (3.18) into Eq. (3.14) yields

$$\begin{aligned} \begin{aligned} d{\boldsymbol{Y}_N}&= \int \limits _{{t_0}}^{{t_f}} {\left[ {{\boldsymbol{B}_s}(t) \cdot \sum \limits _{j = 1}^{{N_p}} {d{\boldsymbol{C}_j}{P_j}(t)} } \right] } \mathrm{{ }}dt + {\boldsymbol{B}_\kappa } \cdot \Delta \kappa \\&= \sum \limits _{j = 1}^{{N_p}} {{\boldsymbol{A}_j} \cdot d{\boldsymbol{C}_j}} + {\boldsymbol{B}_\kappa } \cdot \Delta \kappa \end{aligned} \end{aligned}$$
(3.19)

in which

$$\begin{aligned} {\boldsymbol{A}_j} = \int \limits _{{t_0}}^{{t_f}} {{\boldsymbol{B}_s}(t) \cdot {P_j}(t)dt} ,j = 1,2 \ldots N_p \end{aligned}$$
(3.20)

In Eq. (3.19), \(\boldsymbol{A}_j\) is the spectral sensitivity matrix, as per Ref. [15], which relates the error of the coefficients of the jth spectral function, \(d\boldsymbol{C}_j\), to the error of the output, \(d\boldsymbol{Y}_N\). Thus, a linear relation between the error of the final output and the errors of the coefficient vectors as well as the scale factor is obtained. The desired coefficients and scale factor can then be worked out by formulating a static programming problem.

The update equation for the coefficient vectors can be written as

$$\begin{aligned} \boldsymbol{C}_j^{l+1} = \boldsymbol{C}_j^l - d\boldsymbol{C}_j^l \end{aligned}$$
(3.21)

where \(\boldsymbol{C}_j^l\) denotes the jth coefficient vector at the current iteration (represented by the superscript l), and \(\boldsymbol{C}_j^{l+1}\) denotes the updated coefficient vector at the next iteration (represented by the superscript \(l+1\)). Substituting the expression for \(d\boldsymbol{C}_j^l\) from Eq. (3.21) into Eq. (3.19), the error of the final output can be written as

$$\begin{aligned} d{\boldsymbol{Y}_N} = \sum \limits _{j = 1}^{{N_p}} {{\boldsymbol{A}_j}\,d\boldsymbol{C}_j^l} + {\boldsymbol{B}_\kappa } \cdot \Delta \kappa = \sum \limits _{j = 1}^{{N_p}} {{\boldsymbol{A}_j}\left( \boldsymbol{C}_j^l - \boldsymbol{C}_j^{l\mathrm{{ + }}1}\right) } + {\boldsymbol{B}_\kappa } \cdot \Delta \kappa = {\boldsymbol{c}_\lambda } - \sum \limits _{j = 1}^{{N_p}} {{\boldsymbol{A}_j}\boldsymbol{C}_j^{l + 1}} + {\boldsymbol{B}_\kappa } \cdot \Delta \kappa \end{aligned}$$
(3.22)

where

$$\begin{aligned} \boldsymbol{c}_\lambda = \sum \limits _{j = 1}^{{N_p}} {{\boldsymbol{A}_j}\boldsymbol{C}_j^l} \end{aligned}$$
(3.23)

Equation (3.22) is a set of linear equations in \(\boldsymbol{C}_j^{l+1}\) and \(\Delta \kappa \), containing \(N_p\times m+1\) unknowns and p equations, where \(N_p\times m+1>p\). Hence, Eq. (3.22) represents an under-determined system, and an appropriate performance index must be imposed to single out a solution. Here, we consider minimizing the control effort, that is,

$$\begin{aligned} J = {1 \over 2}\int \limits _{{t_0}}^{t_f^l} {\left[ {{\boldsymbol{U}^{l + 1}}{{(t)}^T}\boldsymbol{R}(t){\boldsymbol{U}^{l + 1}}(t)} \right] } \ dt + {1 \over 2}{R_\kappa }\Delta {\kappa ^2} \end{aligned}$$
(3.24)

where \(\boldsymbol{R}(t)\) and \(R_\kappa \) are the weights on the control and the scale factor, respectively. Substituting Eq. (3.17) into Eq. (3.24) yields

$$\begin{aligned} J = {1 \over 2}\int \limits _{{t_0}}^{t_f^l} {\left[ {{{\left( {\sum \limits _{i = 1}^{{N_p}} {\boldsymbol{C}_i^{l + 1}} {P_i}(t)} \right) }^T}\boldsymbol{R}(t)\left( {\sum \limits _{i = 1}^{{N_p}} {\boldsymbol{C}_i^{l + 1}} {P_i}(t)} \right) } \right] } \ dt + {1 \over 2}{R_\kappa }\Delta {\kappa ^2} \end{aligned}$$
(3.25)

Then, combining the cost function (3.25) with the constraints given in Eq. (3.22), a static programming problem for the coefficient vectors and the scale factor can be formulated. The augmented cost function of this problem is given by

$$\begin{aligned} \bar{J} = J + {\boldsymbol{\lambda }^T}\Bigg (d{\boldsymbol{Y}_N} - {\boldsymbol{c}_\lambda } + \sum \limits _{j = 1}^{{N_p}} {{\boldsymbol{A}_j}\boldsymbol{C}_j^{l + 1}} - {\boldsymbol{B}_\kappa } \cdot \Delta \kappa \Bigg ) \end{aligned}$$
(3.26)

where \(\boldsymbol{\lambda } \in {\boldsymbol{R}^p}\) is the Lagrange multiplier vector. The first-order optimality conditions are

$$\begin{aligned} {{\partial \bar{J}} \over {\partial \boldsymbol{C}_j^{l + 1}}} = \sum \limits _{i = 1}^{{N_p}} {{\boldsymbol{R}_{ij}} \cdot \boldsymbol{C}_i^{l + 1}} + {\boldsymbol{A}_j}^T \cdot \boldsymbol{\lambda } = \boldsymbol{0},\quad j = 1,2, \ldots ,N_p \end{aligned}$$
(3.27)
$$\begin{aligned} {{\partial \bar{J}} \over {\partial \Delta \kappa }} = {R_\kappa } \cdot \Delta \kappa - {\boldsymbol{B}_\kappa }^T \cdot \boldsymbol{\lambda } = 0 \end{aligned}$$
(3.28)

where

$$\begin{aligned} {\boldsymbol{R}_{ij}} = \int \limits _{{t_0}}^{t_f^l} {\left[ {{P_i}(t)\boldsymbol{R}(t){P_j}(t)} \right] } \ dt \end{aligned}$$
(3.29)

Thus, Eqs. (3.22), (3.27) and (3.28) make up a set of linear equations in \(\boldsymbol{C}_j^{l + 1}\), \(\boldsymbol{\lambda }\) and \(\Delta \kappa \), which can be written in the compact form

$$\begin{aligned} \boldsymbol{D} \cdot \boldsymbol{\tilde{X}} = \boldsymbol{E} \end{aligned}$$
(3.30)

where

$$\begin{aligned} \boldsymbol{D} = \left[ {\begin{matrix} {{\boldsymbol{R}_{11}}} &{} \cdots &{} {{\boldsymbol{R}_{1{N_p}}}} &{} {{\boldsymbol{A}_1}^T} &{} 0 \\ \vdots &{} \ddots &{} \vdots &{} \vdots &{} \vdots \\ {{\boldsymbol{R}_{{N_p}1}}} &{} \cdots &{} {{\boldsymbol{R}_{{N_p}{N_p}}}} &{} {{\boldsymbol{A}_{{N_p}}}^T} &{} 0\\ {{\boldsymbol{A}_1}} &{} \cdots &{} {{\boldsymbol{A}_{{N_p}}}} &{} 0 &{} {- {\boldsymbol{B}_\kappa }}\\ 0 &{} \cdots &{} 0 &{} {- {\boldsymbol{B}_\kappa }^T} &{} {{R_\kappa }} \\ \end{matrix} } \right] ,\boldsymbol{\tilde{X}} = \left[ {\begin{matrix} {\boldsymbol{C}_1^{l + 1}} \\ \vdots \\ {\boldsymbol{C}_{{N_p}}^{l + 1}} \\ \boldsymbol{\lambda } \\ {\Delta \kappa } \\ \end{matrix} } \right] ,\boldsymbol{E} = \left[ {\begin{matrix} {\boldsymbol{0}} \\ \vdots \\ {\boldsymbol{0}} \\ {{{\boldsymbol{c}}_\lambda } - d{{\boldsymbol{Y}}_N}} \\ 0 \\ \end{matrix} } \right] \end{aligned}$$
(3.31)

Equation (3.30) contains \({N_p}\times m+p+1\) unknowns (i.e., \(\boldsymbol{C}_1^{l + 1},\boldsymbol{C}_2^{l + 1}, \ldots ,\boldsymbol{C}_{{N_p}}^{l + 1},\boldsymbol{\lambda },\Delta \kappa \)) and the same number of equations. Assuming that the matrix \(\boldsymbol{D}\) is nonsingular, the unknown vector \(\boldsymbol{\tilde{X}}\) can be solved by

$$\begin{aligned} \boldsymbol{\tilde{X}} = {\boldsymbol{D}^{{{ - }}1}} \cdot \boldsymbol{E} \end{aligned}$$
(3.32)

Consequently, the updated coefficients \(\boldsymbol{C}_j^{l+1}\) and scale factor \(\Delta \kappa \) are obtained from the solution \(\boldsymbol{\tilde{X}}\). Substituting \(\Delta \kappa \) into Eq. (3.3), the updated terminal time is obtained as

$$\begin{aligned} t_f^{l+1} = t_f^l+\Delta \kappa \cdot (t_f^l-{t_0}) \end{aligned}$$
(3.33)

Substituting \(\boldsymbol{C}_j^{l+1}\,(j=1,2, \ldots ,{N_p})\) into Eq. (3.17), the updated control history over \(t\in [{t_0},t_f^{l + 1}]\) is eventually given by

$$\begin{aligned} {\boldsymbol{U}^{l + 1}}(t) = \sum \limits _{i = 1}^{{N_p}} {\boldsymbol{C}_i^{l + 1}} {P_i}(t),t \in [{t_0},t_f^{l + 1}] \end{aligned}$$
(3.34)
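The full update cycle of Eqs. (3.17)-(3.34) can be sketched on a deliberately simple scalar problem: dynamics \(\dot{x} = u\), output \(Y = x(t_f)\), a single constant basis function, and free final time. All numerical values below are illustrative assumptions, and instead of assembling the full matrix system (3.30), the under-determined relation (3.22) is resolved directly by its weighted minimum-effort solution:

```python
# Toy instance of the IGS-MPSP update cycle: xdot = u, Y = x(tf), one basis
# function P1(t) = 1 (so U = C1), free final time. Yd, R, Rk and the initial
# guesses are assumed values for illustration only.
t0, x0 = 0.0, 0.0
Yd = 2.0                  # desired final output
R, Rk = 1.0, 100.0        # control and scale-factor weights
C, tf = 1.0, 1.0          # initial guesses for coefficient and terminal time

for _ in range(5):
    Y = x0 + C * (tf - t0)      # "simulate": final output for constant control
    dY = Y - Yd                 # current output error
    # Sensitivities: f_x = 0 gives W(t) = 1, hence Bs(t) = 1 and
    A1 = tf - t0                # A_1  = int of Bs*P1      (Eq. 3.20)
    Bk = -(Y - x0)              # B_kappa = -int of W*xdot (Eq. 3.16)
    R11 = R * (tf - t0)         # R_11 = int of P1*R*P1    (Eq. 3.29)
    # Linearized constraint (Eq. 3.22):  A1*C_new - Bk*dk = c_lambda - dY
    rhs = A1 * C - dY
    # Weighted minimum-effort solution of  a*C_new + b*dk = rhs:
    a, b = A1, -Bk
    mu = rhs / (a * a / R11 + b * b / Rk)   # scalar Lagrange multiplier
    C = a * mu / R11                        # updated coefficient C^{l+1}
    dk = b * mu / Rk                        # updated scale factor
    tf = tf + dk * (tf - t0)                # updated terminal time (Eq. 3.33)

Y = x0 + C * (tf - t0)
assert abs(Y - Yd) < 0.05   # the output error is driven near zero
assert tf > 1.0             # the free final time moved away from its guess
```

In this toy problem the iteration stretches the interval while lowering the coefficient, trading flight time for control effort, and the output error shrinks at every pass; this mirrors the intended behavior of the free-final-time search, albeit on a system far simpler than an ascent problem.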

Remark 1

To implement the IGS-MPSP algorithm, the sensitivity matrices \(\boldsymbol{A}_j\), \(\boldsymbol{B}_\kappa \) and \(\boldsymbol{R}_{ij}\) need to be worked out in each iteration. The Gauss quadrature collocation method can be applied to compute these matrices efficiently and thereby preserve the computational efficiency of the approach. The detailed procedure is presented in the next subsection.

Remark 2

Compared with the original algorithm [15], the improved method introduces a scale factor of the time interval to adjust the terminal time, and a sensitivity relation for this factor is derived accordingly. This improves the accuracy of the sensitivity relation for the terminal time and hence makes it possible to search for the final time over a wide range even when only a poor initial guess is provided. That is, the convergence robustness with respect to the initial guess of the final time is improved.

3.2.3 The Computation of the Sensitivity Matrices by Gauss Quadrature Collocation

In this subsection, the Gauss quadrature collocation method is applied to efficiently compute the sensitivity matrices \(\boldsymbol{A}_j\), \(\boldsymbol{B}_\kappa \) and \(\boldsymbol{R}_{ij}\). The detailed procedure is presented as follows.

For convenience of the solution, the physical time \(t\in [{t_0},{t_f}]\) is converted to the scaled time \(\tau \in [-1,1]\) by the following relation:

$$\begin{aligned} t \equiv t(\tau ,{t_0},{t_f}) = \frac{{{t_f} - {t_0}}}{2}\tau + \frac{{{t_f} + {t_0}}}{2} \end{aligned}$$
(3.35)

Next, the collocation method is used to solve the weighting-matrix dynamics presented in Eqs. (3.12) and (3.13). First, we rewrite the matrix equation (3.12) as the following vector equations with the independent variable \(\tau \):

$$\begin{aligned} \begin{aligned}&{{\boldsymbol{\dot{W}}}_k}(\tau ) = -{\boldsymbol{W}_k}(\tau ) \cdot {\boldsymbol{f}_x}(\tau ) \\&k = 1,2, \ldots ,p \\ \end{aligned} \end{aligned}$$
(3.36)

where \(\boldsymbol{W}_k(\tau )\) denotes the kth row vector of matrix \(\boldsymbol{W}(t)\), and \(\boldsymbol{f}_x(\tau )\) is defined by

$$\begin{aligned} {\boldsymbol{f}_x}(\tau ) \triangleq \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{X}(t)}} \cdot \frac{{{t_f} - {t_0}}}{2} \end{aligned}$$
(3.37)

Then N Lagrange interpolating polynomials \({L_i}(\tau )\,(i = 1,2, \ldots ,N)\) are used to approximate both sides of Eq. (3.36), by which the differential equation is converted into a set of algebraic equations at specified collocation points \({\tau _i}\). Here, Gauss-Lobatto-type collocation points are used, such as the Legendre-Gauss-Lobatto (LGL) or Chebyshev-Gauss-Lobatto (CGL) points. In principle, \(\boldsymbol{W}_k(\tau )\) should satisfy Eq. (3.36) at all collocation points \({\tau _i}\,(i = 1,2, \ldots , N)\). However, \(\boldsymbol{W}_k\) is in effect propagated backward from \(\tau _N\) to \(\tau _1\), since its value at the final time \(t_f\) (i.e., at \(\tau _N\)) is known; \({\boldsymbol{W}_k}(\tau ){{{|}}_{\tau = {\tau _\mathrm{{1}}}}}\) corresponds to the last step of this backward propagation, i.e., to the result itself. Therefore, \({\boldsymbol{W}_k}(\tau ){{{|}}_{\tau = {\tau _\mathrm{{1}}}}}\) is not required to strictly satisfy the differential equation (3.36), and only the \(N-1\) collocation points \({\tau _i}\,(i = 2, \ldots , N)\) are used to form the corresponding collocation equations. Consequently, the collocation equations are given in the compact form:

$$\begin{aligned} (\boldsymbol{D} \otimes {\boldsymbol{I}_n}) \cdot {\mathbf{{\Omega }}_k} = - \boldsymbol{f} \cdot {\mathbf{{\Omega }}_k} \end{aligned}$$
(3.38)

where \({\mathbf{{\Omega }}_k} = {\left[ {{\boldsymbol{W}_k}({\tau _1}),{\boldsymbol{W}_k}({\tau _2}), \ldots ,{\boldsymbol{W}_k}({\tau _N})} \right] ^T}\); \(\boldsymbol{I}_n\) is an \(n\times n\) identity matrix and \(\boldsymbol{D} \otimes {\boldsymbol{I}_n}\) denotes the Kronecker product of \(\boldsymbol{D}\) and \(\boldsymbol{I}_n\); \(\boldsymbol{D} \in {{\mathbb {R} }^{(N - 1) \times N}}\) is known as the differentiation matrix (not to be confused with the matrix \(\boldsymbol{D}\) in Eq. (3.30)). The matrices \(\boldsymbol{D}\) and \(\boldsymbol{f}\) are given by

$$\begin{aligned} \boldsymbol{D} = \left[ {\begin{matrix} {{{\dot{L}}_1}({\tau _2})} &{} {{{\dot{L}}_2}({\tau _2})} &{} \cdots &{} {{{\dot{L}}_N}({\tau _2})} \\ {{{\dot{L}}_1}({\tau _3})} &{} {{{\dot{L}}_2}({\tau _3})} &{} \cdots &{} {{{\dot{L}}_N}({\tau _3})} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {{{\dot{L}}_1}({\tau _N})} &{} {{{\dot{L}}_2}({\tau _N})} &{} \cdots &{} {{{\dot{L}}_N}({\tau _N})} \\ \end{matrix} } \right] ,\boldsymbol{f} = \left[ {\begin{matrix} {{\boldsymbol{0}_{(N - 1)n \times n}}} &{} {\mathrm{blkdiag}\left( {\boldsymbol{f}_x^T({\tau _2}), \ldots ,\boldsymbol{f}_x^T({\tau _N})} \right) } \\ \end{matrix} } \right] \end{aligned}$$
(3.39)

Equation (3.38) can be further rearranged as

$$\begin{aligned} \boldsymbol{A}{\mathbf{{\Omega }}_k} = 0 \end{aligned}$$
(3.40)

where \(\boldsymbol{A} = \boldsymbol{f} + (\boldsymbol{D} \otimes \boldsymbol{I}_n) \in {\boldsymbol{R}^{(N - 1)n \times Nn}}\). Equation (3.40) contains \((N-1)n\) linear equations in the \((N-1)n\) unknowns \({\boldsymbol{W}_k}({\tau _i})\,(i = 1,2,\ldots ,N - 1)\). Defining the unknown vector \({\boldsymbol{X}_k} = {\left[ {{\boldsymbol{W}_k}({\tau _1}),{\boldsymbol{W}_k}({\tau _2}), \ldots ,{\boldsymbol{W}_k}({\tau _{N-1}})} \right] ^T}\), it follows that \(\mathbf {\Omega }_k = {\left[ {{\boldsymbol{X}_k}^T} \ \ {{\boldsymbol{W}_k}({\tau _N})} \right] ^T}\). Next, \(\boldsymbol{A}\) is partitioned as \(\boldsymbol{A} = [\boldsymbol{A}_F,\boldsymbol{A}_N]\), where \(\boldsymbol{A}_F\) and \(\boldsymbol{A}_N\) denote the first \((N-1)n\) columns and the remaining n columns of \(\boldsymbol{A}\), respectively. Using these relations, the linear equations can be further expressed as

$$\begin{aligned} \boldsymbol{A}{\mathbf{{\Omega }}_k} = \left[ {{\boldsymbol{A}_F},{\boldsymbol{A}_N}} \right] \cdot \left[ {\begin{matrix} {{\boldsymbol{X}_k}} \\ {{\boldsymbol{W}_k}{{({\tau _N})}^T}} \\ \end{matrix} } \right] = {\boldsymbol{A}_F}{\boldsymbol{X}_k} + {\boldsymbol{A}_N}{\boldsymbol{W}_k}{({\tau _N})^T} = 0 \end{aligned}$$
(3.41)

Assuming that the matrix \(\boldsymbol{A}_F\) is nonsingular, \(\boldsymbol{X}_k\) is eventually solved by

$$\begin{aligned} {\boldsymbol{X}_k} = - \boldsymbol{A}_F^{ - 1}{\boldsymbol{A}_N} \cdot {\boldsymbol{W}_k}{({\tau _N})^T} \end{aligned}$$
(3.42)

The solution \({\boldsymbol{X}_k}\) gives the values of the kth row of the matrix \(\boldsymbol{W}(\tau )\) at the collocation points \({\tau _1},{\tau _2}, \ldots ,{\tau _{N - 1}}\). By repeating the above procedure for each row (\(k = 1,2, \ldots ,p\)), the matrix \(\boldsymbol{W}(\tau )\) at all the collocation points (\({\tau _1},{\tau _2}, \ldots ,{\tau _N}\)) is obtained.
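The procedure of Eqs. (3.36)-(3.42) can be sketched for a scalar system \(\dot{x} = ax\), for which the exact solution of Eqs. (3.12)-(3.13) is \(W(t) = e^{a(t_f - t)}\). The node ordering, the barycentric construction of the differentiation matrix \(\dot{L}_i(\tau _j)\) of Eq. (3.39) (one standard way to build it), and all numerical values are implementation assumptions:

```python
import math

# Collocation solve of W' = -fx*W with W known at tau_N, cf. Eqs. (3.36)-(3.42).
# Scalar case (n = p = 1): f_x = a, exact solution W(t) = exp(a*(tf - t)).
a, t0, tf, N = -0.5, 0.0, 2.0, 10

# Chebyshev-Gauss-Lobatto points on [-1, 1], ascending (tau_0=-1 ... tau_N=1)
tau = [-math.cos(math.pi * j / N) for j in range(N + 1)]

# Differentiation matrix from barycentric weights (valid for distinct nodes)
wb = [1.0 / math.prod(tau[i] - tau[j] for j in range(N + 1) if j != i)
      for i in range(N + 1)]
D = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(N + 1):
    for j in range(N + 1):
        if i != j:
            D[i][j] = (wb[j] / wb[i]) / (tau[i] - tau[j])
    D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)

fx = a * (tf - t0) / 2.0        # Eq. (3.37), scalar case
WN = 1.0                        # boundary value W(tau_N) = dY/dx(tf)

# Collocate at tau_1..tau_N (row at tau_0 dropped); unknowns W_0..W_{N-1}:
#   sum_j (D[i][j] + fx*delta_ij) * W_j = -(D[i][N] + fx*delta_iN) * WN
A = [[D[i][j] + (fx if i == j else 0.0) for j in range(N)]
     for i in range(1, N + 1)]
rhs = [-(D[i][N] + (fx if i == N else 0.0)) * WN for i in range(1, N + 1)]

# Solve A * Wvec = rhs by Gaussian elimination with partial pivoting
for k in range(N):
    p = max(range(k, N), key=lambda r: abs(A[r][k]))
    A[k], A[p] = A[p], A[k]; rhs[k], rhs[p] = rhs[p], rhs[k]
    for r in range(k + 1, N):
        m = A[r][k] / A[k][k]
        for c in range(k, N):
            A[r][c] -= m * A[k][c]
        rhs[r] -= m * rhs[k]
Wvec = [0.0] * N
for k in range(N - 1, -1, -1):
    Wvec[k] = (rhs[k] - sum(A[k][c] * Wvec[c] for c in range(k + 1, N))) / A[k][k]

# Spectral accuracy against the exact solution W(t) = exp(a*(tf - t))
err = max(abs(Wvec[i] - math.exp(a * (tf - (0.5 * (tf - t0) * tau[i]
                                         + 0.5 * (tf + t0)))))
          for i in range(N))
assert err < 1e-6
```

Even with only eleven nodes, the collocation solution matches the exact exponential to near machine precision, which is the accuracy-per-node advantage the quasi-spectral discretization exploits.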

Subsequently, the sensitivity matrix \({\boldsymbol{B}_s}(t)\) at the collocation points is calculated according to Eq. (3.15):

$$\begin{aligned} {\boldsymbol{B}_s}({\tau _i}) = {\left. {\boldsymbol{W}({\tau _i}) \cdot \frac{{\partial \boldsymbol{f}(\boldsymbol{X},\boldsymbol{U},t)}}{{\partial \boldsymbol{U}(t)}}} \right| _{\tau = {\tau _i}}},\quad i = 1,2, \ldots , N \end{aligned}$$
(3.43)

Lastly, the principle of Gaussian quadrature [16] is applied to compute the sensitivity matrices \({\boldsymbol{A}_j}\), \({\boldsymbol{B}_\kappa }\) and \({\boldsymbol{R}_{ij}}\):

$$\begin{aligned} \begin{aligned}&{\boldsymbol{A}_j} = \int \limits _{{t_0}}^{{t_f}} {{\boldsymbol{B}_s}(t) \cdot {P_j}(t)dt} = \frac{{{t_f} - {t_0}}}{2}\sum \limits _{i = 1}^N {{\boldsymbol{B}_s}({\tau _i}) \cdot {P_j}({\tau _i})} \cdot {\eta _i}\\&j = 1,2, \ldots ,{N_p} \end{aligned} \end{aligned}$$
(3.44)
$$\begin{aligned} {\boldsymbol{B}_\kappa } \triangleq - \int \limits _{{t_0}}^{{t_f}} {\left[ {\boldsymbol{W}(t) \cdot \boldsymbol{\dot{X}}(t)} \right] } \ dt = - \frac{{{t_f} - {t_0}}}{2}\sum \limits _{i = 1}^N {\boldsymbol{W}({\tau _i}) \cdot \boldsymbol{\dot{X}}({\tau _i})} \cdot {\eta _i} \end{aligned}$$
(3.45)
$$\begin{aligned} \begin{aligned}&{\boldsymbol{R}_{ij}} = \int \limits _{{t_0}}^{{t_f}} {\left[ {{P_i}(t)\boldsymbol{R}(t){P_j}(t)} \right] } \ dt = \frac{{{t_f} - {t_0}}}{2}\sum \limits _{k = 1}^N {{P_i}({\tau _k})\boldsymbol{R}({\tau _k}){P_j}({\tau _k}) \cdot {\eta _k}} \\&i,j = 1,2, \ldots ,{N_p} \end{aligned} \end{aligned}$$
(3.46)

where \({\eta _i}\) is the Gaussian quadrature weight corresponding to the collocation point \(\tau _i\). In this way, the sensitivity matrices are obtained through a set of algebraic operations at only a few collocation points.
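The quadrature pattern shared by Eqs. (3.44)-(3.46), namely a weighted sum over a few nodes scaled by \((t_f - t_0)/2\), can be illustrated with a three-point Gauss-Legendre rule. The text uses Gauss-Lobatto families; the Gauss-Legendre nodes and weights below are the standard tabulated values and serve purely as a simple stand-in:

```python
import math

# Quadrature pattern of Eqs. (3.44)-(3.46): map [t0, tf] to [-1, 1] and take
# a weighted node sum scaled by (tf - t0)/2. Three-point Gauss-Legendre rule.
t0, tf = 0.0, 2.0
nodes = [-math.sqrt(0.6), 0.0, math.sqrt(0.6)]
eta = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def quad(g):
    """Approximate the integral of g(t) over [t0, tf]."""
    return (tf - t0) / 2.0 * sum(
        w * g((tf - t0) / 2.0 * s + (tf + t0) / 2.0)
        for s, w in zip(nodes, eta))

# A 3-point Gauss-Legendre rule integrates polynomials up to degree 5 exactly
assert abs(quad(lambda t: t ** 5) - 2.0 ** 6 / 6.0) < 1e-9
```

This is why very few collocation points suffice in Eqs. (3.44)-(3.46): for smooth integrands such as the spectral basis functions, Gaussian rules reach high accuracy with a handful of evaluations.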

Remark 3

In the calculation loop over the rows of the matrix \(\boldsymbol{W}\), the matrix \(\boldsymbol{A}\) remains unchanged and needs to be computed only once, since it depends only on the given collocation points \({\tau _i}\,(i = 1,2, \ldots ,N)\) and on \({\boldsymbol{f}_x}({\tau _i})\). This feature effectively reduces the computational complexity.

Remark 4

Since the spectral sensitivity matrices are computed directly by Gaussian quadrature through Eqs. (3.42)–(3.46), which avoids the heavy computational cost of numerically integrating a series of matrix differential equations, the computational efficiency is improved significantly.

3.2.4 The Implementation Steps of IGS-MPSP

The implementation procedure of this approach is provided in Algorithm 1. The method starts from an initial guess for the spectral coefficients and the terminal time. Then the final output errors and trajectory states are evaluated. If the output errors are within tolerance, the desired control sequence has been obtained. Otherwise, the corresponding sensitivity matrices are recalculated, and the spectral coefficients as well as the terminal time are updated. The updated control history is then generated, and the output errors are evaluated again. This iterative procedure is repeated until a specified criterion on the terminal output errors is met.

Algorithm 1: IGS-MPSP algorithm

$$\begin{aligned} \begin{array}{l} \hline {\textbf {Step 1: }}\, \text {Initialize the guesses}\, \boldsymbol{C}_j^0, t_f^0,\, \text {the stopping criterion}\, {\delta _y},\, \text {the number of spectral}\\ \quad \quad \quad \text {functions}\, {N_p},\, \text {and the number of collocation points}\, N. \\ {\textbf {Step 2: For}}\, k = 0,1,2, \ldots \\ \quad \quad (2.1)\, \text {Compute the control history}\, {\boldsymbol{U}^k}(t),t \in [{t_0},{t_f}]\, \text {by Eq.\,(3.17);}\\ \quad \quad \quad \quad \text {integrate the system dynamics to obtain the trajectory states and the output error}\\ \quad \quad \quad \quad {\left\| {d{\boldsymbol{Y}_N}} \right\| _\infty };\\ \quad \quad (2.2)\, \text {If}\, {\left\| {d{\boldsymbol{Y}_N}} \right\| _\infty } < {\delta _y},\, \text {output the current control history}\, {\boldsymbol{U}^k}(t)\, \text {and stop;} \\ \quad \quad \quad \quad \text {otherwise continue the iteration.}\\ \quad \quad (2.3)\, \text {Compute the sensitivity matrices}\, {\boldsymbol{A}_j},{\boldsymbol{B}_\kappa }\, \text {and}\, {\boldsymbol{R}_{ij}}\, \text {according to Eqs.\,(3.42)--(3.46);}\\ \quad \quad \quad \quad \text {solve the linear equation (3.32) to obtain the updated solution;}\\ \quad \quad (2.4)\, \text {Update the terminal time by}\, t_f^{k + 1} = t_f^k + \Delta \kappa \cdot (t_f^k - {t_0});\\ \quad \quad (2.5)\, \text {Obtain the updated coefficient vectors}\, \boldsymbol{C}_j^{k + 1}\, \text {from the updated solution}\, \boldsymbol{\tilde{X}}.\\ \hline \end{array} \end{aligned}$$
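As an illustration only, the loop below mirrors the structure of Algorithm 1 on a deliberately simple toy problem. The functions `propagate` and `solve_update` are hypothetical stand-ins for the real Steps 2.1 and 2.3, which involve Eqs. (3.17), (3.32), and (3.42)–(3.46); nothing here reproduces the actual launch vehicle dynamics.

```python
import numpy as np

# Toy stand-ins for the problem-specific pieces of Algorithm 1.
def propagate(C, tf):
    # toy "terminal output error": depends on the coefficient and final time
    return np.array([C[0] * tf - 4.0])

def solve_update(C, tf, dY):
    # toy sensitivity solve: a Newton step on C[0], no time-scale correction
    dC = -dY[0] / tf
    d_kappa = 0.0
    return dC, d_kappa

C, tf, t0 = np.array([1.0]), 2.0, 0.0  # initial guesses; t0 fixed at 0
delta_y = 1e-8                         # stopping tolerance on the output error
for k in range(50):
    dY = propagate(C, tf)                     # Step 2.1: predict output error
    if np.linalg.norm(dY, np.inf) < delta_y:  # Step 2.2: convergence check
        break
    dC, d_kappa = solve_update(C, tf, dY)     # Step 2.3: sensitivity solve
    tf = tf + d_kappa * (tf - t0)             # Step 2.4: update terminal time
    C = C + dC                                # Step 2.5: update coefficients
```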

Since the spectral coefficients have no physical meaning, it is not straightforward to assign an initial guess with appropriate values. Therefore, a least-squares algorithm is applied to obtain the initial guess of the spectral coefficients once an initial guess of the control sequence is provided.

Denoting \(\boldsymbol{\hat{C}}^0 = [\boldsymbol{\hat{C}}_1^0,\boldsymbol{\hat{C}}_2^0, \ldots ,\boldsymbol{\hat{C}}_{{N_p}}^0]\) as the initial guess of the spectral coefficients, as per Eq. (3.17), the control vector represented by the guess \(\boldsymbol{\hat{C}}^0\) at time step \({t_k},k = 1,2, \ldots ,n\) can be written as

$$\begin{aligned} \boldsymbol{\hat{U}}_k^0 = \sum \limits _{j = 1}^{{N_p}} {{P_j}({t_k})\boldsymbol{\hat{C}}_j^0} ,k = 1,2, \ldots , n \end{aligned}$$
(3.47)

In matrix form, Eq. (3.47) can be written as

$$\begin{aligned} {\boldsymbol{\hat{U}}^0} = {\boldsymbol{\hat{C}}^0}\boldsymbol{P} \end{aligned}$$
(3.48)

where \({\boldsymbol{\hat{U}}^0} = [\boldsymbol{\hat{U}}_1^0,\boldsymbol{\hat{U}}_2^0, \ldots ,\boldsymbol{\hat{U}}_n^0]\), and

$$\begin{aligned} \boldsymbol{P} = \left[ {\begin{array}{*{20}{c}} {{P_1}({t_1})}&{}{{P_1}({t_2})}&{} \ldots &{}{{P_1}({t_n})}\\ {{P_2}({t_1})}&{}{{P_2}({t_2})}&{} \ldots &{}{{P_2}({t_n})}\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {{P_{{N_p}}}({t_1})}&{}{{P_{{N_p}}}({t_2})}&{} \ldots &{}{{P_{{N_p}}}({t_n})} \end{array}} \right] \end{aligned}$$
(3.49)

If the initial control guess sequence \(\boldsymbol{U}^0 = [\boldsymbol{U}_1^0,\boldsymbol{U}_2^0, \ldots ,\boldsymbol{U}_n^0]\) is given, the spectral function coefficients \(\boldsymbol{\hat{C}}^0\) are sought so as to minimize \(\Vert \boldsymbol{\hat{U}}^0 - \boldsymbol{U}^0\Vert \). According to the least-squares principle, the coefficients are estimated as

$$\begin{aligned} {\boldsymbol{\hat{C}}^0} = {\boldsymbol{U}^0}{\boldsymbol{P}^T} \cdot {(\boldsymbol{P}{\boldsymbol{P}^T})^{-1}} \end{aligned}$$
(3.50)

The initial guess of the spectral coefficients is obtained from Eq. (3.50).
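Equations (3.47)–(3.50) form a standard linear least-squares fit. A minimal sketch follows, assuming Legendre polynomials as the spectral basis on a normalized time grid and a sinusoidal sample control history; the basis choice, grid, and sample signal are illustrative assumptions, not taken from the text.

```python
import numpy as np
from numpy.polynomial import legendre

Np, n = 6, 50                                  # spectral functions / time steps
t = np.linspace(-1.0, 1.0, n)                  # normalized time grid
# Rows of P are the basis functions P_j evaluated at each t_k, as in Eq. (3.49)
P = np.stack([legendre.Legendre.basis(j)(t) for j in range(Np)])

U0 = np.sin(t)[np.newaxis, :]                  # a 1 x n sample control history
C0 = U0 @ P.T @ np.linalg.inv(P @ P.T)         # least-squares fit, Eq. (3.50)
U_hat = C0 @ P                                 # reconstructed control, Eq. (3.48)
```

For a smooth control profile such as this one, a handful of spectral coefficients reproduces the full sequence closely, which is what makes the low-dimensional spectral representation attractive.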

3.3 The Ascent Predictive Guidance Under Thrust Drop Fault

In this section, the proposed method is employed to solve the ascent guidance problem of a launch vehicle under a thrust drop fault. The problem formulation is first introduced; then the detailed procedure to address this problem is presented.

3.3.1 Problem Formulation

For convenience of solution, a modified orbital inertial (MOI) coordinate system is first defined as follows. As shown in Fig. 3.1, \({{P}_{F}}\) is the position of the launch vehicle when the fault occurs; \(P'_F\) is the projection of \({{P}_{F}}\) onto the injected orbital plane; and \({{P}_{f}}\) is the nominal injection point. The origin of the MOI coordinate system is located at the center of the Earth. The coordinate plane coincides with the injected orbital plane \(O{P'_F}{{P}_{f}}\), in which the axis \({{O}_{e}}Y\) points to the midpoint of the arc \({P'_F}{{P}_{f}}\) and the axis \({{O}_{e}}X\) is perpendicular to \({{O}_{e}}Y\). Lastly, the axis \({{O}_{e}}Z\) is determined by the right-hand rule.

Fig. 3.1
figure 1

Modified orbital inertial (MOI) coordinate system

Note that the MOI coordinate system can be determined by the position of the launch vehicle at the time the fault occurs and the injected orbit information (the inclination \({{i}_{f}}\), the longitude of ascending node \({{\Omega }_{f}}\), and the injection point of the nominal trajectory). The relationship between the MOI coordinate system and the Earth-centered inertial (ECI) system is given by

$$\begin{aligned} {{\textrm{X}}^{MOI}}=\textrm{M}_{ECI}^{MOI}({{\Omega }_{f}},{{i}_{f}},{{{P}'}_{F}})\cdot {{\textrm{X}}^{ECI}} \end{aligned}$$
(3.51)

where \(\textrm{M}_{ECI}^{MOI}\) denotes the transformation matrix from the ECI coordinate system to the MOI system.
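The entries of \(\textrm{M}_{ECI}^{MOI}\) are not spelled out in the text; one plausible construction composes elementary rotations through \(\Omega_f\), \(i_f\), and an in-plane angle derived from \(P'_F\) and the nominal injection point. The sketch below is illustrative only: the in-plane angle `u_mid` is left as an input rather than derived, and the composition order is an assumption.

```python
import numpy as np

def R1(a):
    """Frame rotation about the x-axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def R3(a):
    """Frame rotation about the z-axis by angle a."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def M_eci_to_moi(Omega_f, i_f, u_mid):
    """A plausible ECI-to-MOI transformation: rotate into the injected orbital
    plane with R1(i_f) R3(Omega_f), then rotate in-plane by u_mid so that the
    Y axis points at the midpoint of the arc P'_F P_f (u_mid assumed known)."""
    return R3(u_mid) @ R1(i_f) @ R3(Omega_f)

# Any such composition of rotations must be a proper orthogonal matrix.
M = M_eci_to_moi(np.deg2rad(30.0), np.deg2rad(45.0), np.deg2rad(60.0))
```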

It is considered that the thrust drop fault occurs during the second stage of the launch vehicle. In this flight stage, the launch vehicle is assumed to have flown out of the dense atmosphere, so the aerodynamic forces can be ignored. Therefore, the three-dimensional point-mass dynamic equations of the launch vehicle, built in the MOI coordinate system, are given as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} \boldsymbol{\dot{r}}=\boldsymbol{V} \\ \boldsymbol{\dot{V}}={T\cdot {{\boldsymbol{e}}_{T}}}/{m}\;-{\mu \boldsymbol{r}}/{{{r}^{3}}}\; \\ \dot{m}=-{{m}_{e}} \\ \end{array}\right. } \end{aligned}$$
(3.52)

where \(\boldsymbol{r}={{[{{r}_{x}},{{r}_{y}},{{r}_{z}}]}^{T}}\) is the position vector in the MOI coordinate system; \(\boldsymbol{V}={{[{{V}_{x}},{{V}_{y}},{{V}_{z}}]}^{T}}\) is the inertial velocity vector; T is the thrust magnitude, which is considered constant; m is the mass of the vehicle and \({{m}_{e}}\) is the mass flow rate; and \({{\boldsymbol{e}}_{T}}\) denotes the thrust direction vector, which is generally aligned with the longitudinal body axis of the vehicle and is given by

$$\begin{aligned} {{\boldsymbol{e}}_{T}}={{[\cos \varphi \cos \psi ,\sin \varphi \cos \psi ,-\sin \psi ]}^{T}} \end{aligned}$$
(3.53)

where \(\varphi \) and \(\psi \) are the pitch angle and yaw angle relative to the MOI coordinate system, respectively. The dynamic equations presented in Eqs. (3.52) and (3.53) can be written in the compact form

$$\begin{aligned} \boldsymbol{\dot{x}}=f(\boldsymbol{x},\boldsymbol{u}) \end{aligned}$$
(3.54)

where \(\boldsymbol{x}={{[{{r}_{x}},{{r}_{y}},{{r}_{z}},{{V}_{x}},{{V}_{y}},{{V}_{z}}]}^{T}}\) is the state vector and \(\boldsymbol{u}={{[\varphi ,\psi ]}^{T}}\) is the control vector of the system.
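A minimal propagation sketch of Eqs. (3.52)–(3.54) with a fixed-step RK4 integrator is shown below. The mass history is treated as known outside the routine (\(\dot{m}=-m_e\)), and the numerical values in the sanity check (orbit radius, mass) are illustrative assumptions.

```python
import numpy as np

MU = 3.986004418e14   # Earth gravitational parameter, m^3/s^2

def dynamics(x, u, T, m):
    """Point-mass dynamics of Eqs. (3.52)-(3.53) in the MOI frame.
    x = [r; V], u = [pitch, yaw]; mass m is supplied by the caller."""
    r, V = x[:3], x[3:]
    phi, psi = u
    e_T = np.array([np.cos(phi) * np.cos(psi),
                    np.sin(phi) * np.cos(psi),
                    -np.sin(psi)])                       # thrust direction, Eq. (3.53)
    a = T * e_T / m - MU * r / np.linalg.norm(r) ** 3    # Eq. (3.52)
    return np.concatenate([V, a])

def rk4_step(x, u, T, m, dt):
    """One classical fourth-order Runge-Kutta step of Eq. (3.54)."""
    k1 = dynamics(x, u, T, m)
    k2 = dynamics(x + 0.5 * dt * k1, u, T, m)
    k3 = dynamics(x + 0.5 * dt * k2, u, T, m)
    k4 = dynamics(x + dt * k3, u, T, m)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Zero-thrust sanity check: a circular orbit should keep a constant radius.
R = 6.778e6
x0 = np.array([R, 0.0, 0.0, 0.0, np.sqrt(MU / R), 0.0])
x1 = rk4_step(x0, np.zeros(2), 0.0, 1000.0, 1.0)
```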

Remark 5

Since the flight path angle and angle of attack of the launch vehicle are generally small in the second stage, the defined MOI frame and the corresponding dynamic equations ensure that the pitch angle of the vehicle remains small. This effectively reduces the nonlinearity of the controls presented in Eq. (3.53) and improves the convergence of the algorithm. Moreover, this definition simplifies the terminal constraints to be introduced later.

3.3.2 Terminal Constraints

It is assumed that the thrust fault of the launch vehicle takes place at the initial time \(t_{0}\), and the corresponding states are given by:

$$\begin{aligned} \boldsymbol{X}({{t}_{0}}) = {{\boldsymbol{X}}_{0}} \end{aligned}$$
(3.55)

The final orbital injection time \(t_{f}\) is constrained by

$$\begin{aligned} {{t}_{f}}\le {{t}_{f, \max }} \end{aligned}$$
(3.56)

where \({{t}_{f,\max }}\) is the maximum burn time of the vehicle, which is determined by the remaining fuel and mass flow rate.

The terminal constraints of ascent guidance are determined by the orbital insertion conditions, which are generally specified by the semi-major axis \({{a}_{f}}\), eccentricity \({{e}_{f}}\), orbital inclination \({{i}_{f}}\), and longitude of ascending node \({{\Omega }_{f}}\). Here, insertion into a circular orbit (\({{e}_{f}}=0\)) is considered. Then, the first two conditions can be equivalently described by

$$\begin{aligned} \left\| \boldsymbol{r}({{t}_{f}}) \right\| =r_{f}^{*} \end{aligned}$$
(3.57)
$$\begin{aligned} \left\| \boldsymbol{V}({{t}_{f}}) \right\| =V_{f}^{*} \end{aligned}$$
(3.58)
$$\begin{aligned} {{\boldsymbol{r}}^{T}}({{t}_{f}})\boldsymbol{V}({{t}_{f}})=0 \end{aligned}$$
(3.59)

Moreover, in the MOI coordinate system, the final two orbital insertion conditions \({{i}_{f}}\) and \({{\Omega }_{f}}\) are equivalent to requiring that the final position component \({{r}_{z}}({{t}_{f}})\) and velocity component \({{V}_{z}}({{t}_{f}})\) be zero. Therefore, the terminal constraints of this ascent guidance problem are defined by

$$\begin{aligned} h(\boldsymbol{x}({{t}_{f}})) = \left[ \begin{matrix} \left\| \boldsymbol{r}({{t}_{f}}) \right\| -r_{f}^{*} \\ \left\| \boldsymbol{V}({{t}_{f}}) \right\| -V_{f}^{*} \\ {{\boldsymbol{r}}^{T}}({{t}_{f}})\boldsymbol{V}({{t}_{f}}) \\ {{r}_{z}}({{t}_{f}}) \\ {{V}_{z}}({{t}_{f}}) \\ \end{matrix} \right] =\boldsymbol{0} \end{aligned}$$
(3.60)
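Evaluating Eq. (3.60) reduces to a handful of vector operations. A direct sketch follows; the numerical values in the usage line are illustrative, not taken from the text.

```python
import numpy as np

def terminal_constraints(x_f, r_f_star, V_f_star):
    """The five terminal conditions of Eq. (3.60); every entry is zero at a
    correct circular-orbit injection expressed in the MOI frame."""
    r, V = x_f[:3], x_f[3:]
    return np.array([np.linalg.norm(r) - r_f_star,   # Eq. (3.57): radius
                     np.linalg.norm(V) - V_f_star,   # Eq. (3.58): speed
                     r @ V,                          # Eq. (3.59): r, V orthogonal
                     r[2],                           # in-plane position (i_f, Omega_f)
                     V[2]])                          # in-plane velocity (i_f, Omega_f)

# A state lying exactly on the target circular orbit satisfies all five.
g = terminal_constraints(np.array([7.0e6, 0.0, 0.0, 0.0, 7546.0, 0.0]),
                         7.0e6, 7546.0)
```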

Thus, the ascent guidance problem can be organized as

            \(\boldsymbol{P}^{0}\) : find \({{t}_{f}},\,\boldsymbol{u}(t),\,t\in [{{t}_{0}},{{t}_{f}}]\)

            subject to:

$$\begin{aligned} \boldsymbol{\dot{x}}(t) = \boldsymbol{f}(\boldsymbol{x}(t),\boldsymbol{u}(t),t) \end{aligned}$$
(3.61)
$$\begin{aligned} \boldsymbol{x}({{t}_{0}})={{\boldsymbol{x}}_{0}} \end{aligned}$$
(3.62)
$$\begin{aligned} h(\boldsymbol{x}({{t}_{f}}))=\boldsymbol{0} \end{aligned}$$
(3.63)
$$\begin{aligned} {{t}_{f}}\le {{t}_{f,\max }} \end{aligned}$$
(3.64)

3.3.3 Solution by the IGS-MPSP

As introduced earlier, the proposed algorithm is able to solve the free-final-time guidance problem. However, it cannot directly handle the inequality constraint given in Eq. (3.64). Therefore, an additional numerical check is conducted to address this constraint.

First, the proposed method is employed to solve the problem \(\boldsymbol{P}^{0}\) with the constraint (3.64) omitted. Then the obtained terminal time \({{t}_{f}}\) is checked. If this value is smaller than the maximum \({{t}_{f,\max }}\), the solution is feasible and the obtained control history can be used as the renewed commands of the launch vehicle. Otherwise, the solution is infeasible; that is, the launch vehicle cannot directly enter the required orbit. In this situation, a new guidance strategy is needed, such as entering a new parking orbit or transfer orbit. This case is beyond the scope of this work.

The detailed implementation steps are summarized as follows. As presented in Fig. 3.2, the guidance strategy is triggered by a fault detection. The current state \({{t}_{0}}\), \({{\textrm{x}}_{0}}\) is then employed as the initial state of the proposed algorithm, and the terminal time and control of the nominal trajectory are used as the initial guess. Obviously, these guesses cannot steer the launch vehicle to meet the required terminal states in the presence of the thrust drop fault. Thus, the proposed algorithm is conducted iteratively to obtain the updated terminal time and control history. If the updated terminal time is smaller than the maximum value, the corresponding control history is directly used as the new guidance for the launch vehicle. Otherwise, the launch vehicle cannot directly inject into the original orbit.
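The accept-or-reject logic of this strategy can be sketched as a thin wrapper around the solver. Here `solve_p0` is a placeholder for the IGS-MPSP iteration, and the numbers in the usage line are illustrative only.

```python
def replan_after_fault(solve_p0, t0, x0, u_nom, tf_nom, tf_max):
    """Strategy of Fig. 3.2: solve P0 ignoring the bound (3.64), then accept
    the solution only if the obtained terminal time satisfies tf <= tf_max."""
    tf, u = solve_p0(t0, x0, u_nom, tf_nom)  # nominal profile as initial guess
    if tf <= tf_max:
        return u, tf          # feasible: use the re-planned commands
    return None, tf           # infeasible: fall back to an abort/parking strategy

# Usage with a dummy solver that just stretches the nominal burn by 20%.
u_new, tf_new = replan_after_fault(lambda t0, x0, u, tf: (1.2 * tf, u),
                                   0.0, None, "u_nom", 400.0, 500.0)
```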

Fig. 3.2
figure 2

Ascent guidance strategy for launch vehicle under thrust drop fault

3.4 Numerical Results

In this section, numerical simulations are carried out to demonstrate the performance of the proposed method in terms of accuracy and computational efficiency. It is considered that the thrust drop fault occurs during the second-stage flight phase of a launch vehicle. The nominal parameters of the launch vehicle are listed in Table 3.1. Both the initial conditions (at the fault occurrence time) and the target orbit's parameters are given in Table 3.2.

Table 3.1 Nominal parameters of the launch vehicle
Table 3.2 Initial conditions and target orbit

The proposed method is employed to solve this guidance problem when the thrust of the launch vehicle drops to 70% and 80% of the nominal value, respectively. In the algorithm implementation, sixth-order Legendre polynomials are used as the spectral functions of the control, and the Legendre–Gauss–Lobatto (LGL) points are selected as the collocation points for computing the spectral sensitivity matrix. The number of LGL nodes is taken as 15.
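For reference, the LGL points are the two endpoints of \([-1,1]\) together with the roots of \(P'_{N-1}\). A compact way to generate them with NumPy is sketched below, using the standard closed-form LGL weights; this is a generic construction, not code from the chapter.

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes(N):
    """Legendre-Gauss-Lobatto points on [-1, 1]: the endpoints plus the roots
    of P'_{N-1}, with weights w_i = 2 / (N (N-1) [P_{N-1}(x_i)]^2)."""
    Pn = legendre.Legendre.basis(N - 1)
    interior = np.sort(Pn.deriv().roots().real)
    x = np.concatenate(([-1.0], interior, [1.0]))
    w = 2.0 / (N * (N - 1) * Pn(x) ** 2)
    return x, w

tau, eta = lgl_nodes(15)   # the 15 collocation points used in this chapter
```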

Additionally, an SOCP-based method [11] is conducted here for comparison with the proposed method. This algorithm takes the minimum fuel consumption as the optimization objective, and the classical Euler method is used to discretize the problem. The number of discretization nodes is set to 50, which is determined by jointly considering the solution accuracy and efficiency.

In the simulations, the terminal time and control history of the nominal trajectory are used as the initial guess for both the proposed method and the SOCP-based method. Moreover, all numerical simulations are implemented in the MATLAB 2021a environment on a personal desktop (Intel i7-8750H, 3.2 GHz). The CVX [17] optimization toolbox with SDPT3 4.0 [18] is employed as the solver for the SOCP-based method.

Fig. 3.3
figure 3

Altitude profiles for nominal trajectory and the trajectories obtained by IGS-MPSP

Fig. 3.4
figure 4

Control profiles for the nominal trajectory

3.4.1 The Results by the Proposed Method

The proposed method reaches the required tolerance of the terminal conditions within 8 iterations. Figure 3.3 depicts the altitude profiles of the trajectories with thrust at 70% and 80% of the nominal value. Additionally, the nominal trajectory and the trajectories obtained by applying the nominal control under the thrust drop are also provided in Fig. 3.3. It can be seen that, with the nominal control, the launch vehicle fails to enter the target orbit under the thrust drop. The proposed method succeeds in regenerating updated guidance commands (shown in Fig. 3.4) to steer the vehicle into the original target orbit. Specifically, the trajectory states for each iteration of the proposed algorithm when the thrust is 80% of the nominal value are presented in Fig. 3.5. They clearly show that the proposed method reaches the required orbit injection parameters in a few iterations, even though a relatively poor control and terminal time guess (the nominal trajectory) is given. At the same time, the orbit injection time is considerably increased compared to the nominal value, but is still smaller than the maximum allowable value. These results demonstrate the effectiveness of the proposed method, which is able to re-plan the ascent trajectory under the thrust drop and search for the appropriate orbit injection time when a relatively accurate guess cannot be given.

Fig. 3.5
figure 5

The trajectory for each iteration in the 80% of nominal thrust

3.4.2 Comparison with SOCP Method

Furthermore, a comparison between the proposed method and the SOCP-based method is provided in Table 3.3 and Figs. 3.6 and 3.7.

Table 3.3 lists the terminal mass and terminal time achieved by the proposed algorithm and the SOCP-based method, respectively. It can be noted that the results obtained by the proposed algorithm are very close to those produced by the SOCP method; the deviations are less than 0.025%. This means the proposed method achieves near-optimal fuel consumption compared to the SOCP-based method. Moreover, the control histories and trajectories for the proposed method and SOCP are depicted in Figs. 3.6 and 3.7. As can be seen, the control profiles as well as the trajectories obtained by the proposed method are similar to those of SOCP, in both the 70% and 80% nominal-thrust cases. This demonstrates that the proposed method achieves results comparable to the SOCP-based method in solving the ascent guidance problem with the thrust drop fault.

Table 3.3 The results of final time and final mass
Fig. 3.6
figure 6

Control profiles for the IGS-MPSP and SOCP methods

Fig. 3.7
figure 7

Trajectories by various methods

Lastly, the computational efficiency of the proposed method and the SOCP method is comparatively investigated by conducting the same simulation case. Figure 3.8 and Table 3.4 present the CPU time consumed by the two methods, showing both the time elapsed for one iteration and the total. It can be seen that the CPU time consumed by the proposed method is roughly one-sixtieth to one-seventieth of that of the SOCP method, both per iteration and in total. This result clearly demonstrates the superiority of the proposed method in computational speed. Such high computational efficiency is achieved by a series of careful design choices, such as the spectral representation of the control and the sensitivity matrix computation by the collocation method. Hence, the proposed method has great potential for online application.

Fig. 3.8
figure 8

CPU time consumed by various methods

3.5 Conclusion

In this chapter, a predictive ascent guidance method based on IGS-MPSP for the thrust drop fault of a launch vehicle is presented. First, the IGS-MPSP method is derived. Compared with the original GS-MPSP method, this approach introduces a scale factor for the time interval as an additional variable to adjust the terminal time. Then, a new sensitivity relation for the final time is established. Since the accuracy of the sensitivity relation is improved, the approach has better performance in searching for the appropriate final time in the presence of a poor initial guess. Hence, it is more suitable for the ascent guidance problem. Second, the application of the proposed method to the ascent guidance problem under the thrust drop fault is introduced in detail. A numerical simulation of a typical case and a comparison with the SOCP-based method are carried out. The results indicate the effectiveness of the proposed method, which generates results comparable to those of the SOCP method but with considerably higher computational efficiency.

Table 3.4 The CPU time consumed by various methods