1 Introduction

Feedback linearization is a well-known method that transforms a nonlinear control system into an equivalent linear system by means of a nonlinear state transformation and nonlinear feedback, so that the system is linear in the new coordinates. Exact feedback linearization requires solving coupled nonlinear partial differential equations (PDEs), which can be very challenging. For systems that do not satisfy the conditions for the existence of exact linearization solutions, approximate linearization methods have been proposed. These approximate methods employ numerical algorithms that make computing the requisite nonlinear state transformation and nonlinear feedback more tractable, but they are less useful when the goal is an analytic solution applicable to control system design, especially when the nonlinear system model is described parametrically rather than purely numerically.

The control system design challenge driving this work is a spacecraft trajectory control system for low-Earth-orbit rendezvous utilizing continuous low-thrust propulsion [1, 2]. This problem is known to satisfy the necessary and sufficient conditions for the existence of an exact nonlinear state transformation and nonlinear feedback yielding an equivalent linear system. In fact, there is a known natural solution, found almost by inspection, that requires no nonlinear state transformation and relies entirely on nonlinear feedback to cancel the nonlinearities. However, the question arises as to the existence of other solutions that may provide improved control system performance. In [1] it was shown that two different exact solutions can have different performance in terms of fuel usage, opening the possibility of optimization considerations in selecting the desired exact solution. The underlying dynamics describing the motion of the spacecraft, driven by orbital mechanics, are nonlinear and characterized by parameters such as nominal altitude and orbital rate. This paper is a continuation and extension of [1]. The goal is to develop a control system design strategy, for systems that satisfy the necessary and sufficient conditions for exact feedback linearization, that does not require solving coupled nonlinear PDEs. P. Mulhaupt et al. [3] proposed an approach to find stabilizing control laws based on successive integrations of differential 1-forms utilizing quotient manifolds. Another approach, presented in [4,5,6], provides an explicit linearizing transformation involving the composition and integration of functions. The design strategy proposed in this paper is computationally efficient because the algorithm is posed in linear matrix form. In addition, it can handle parameterized nonlinear systems while providing insight into a family of solutions.
Introducing the null space coefficients explicitly at each step of the approximation gives the control system designer degrees of freedom to satisfy the required design criteria through judicious selection of those coefficients. We emphasize that the linearizing coordinates could have been obtained by other methods; the emphasis here is on the simplicity of the algebraic solution and on generating a family of solutions, from which the control designer can judiciously select an exact nonlinear feedback solution, with possibilities for optimizing performance. The new approach can be implemented directly in MATLAB through a symbolic math toolbox.

2 Approximate feedback linearization

2.1 Preliminaries

Consider the nonlinear control system

$$\begin{aligned} {\dot{\varvec{x}}} =\mathbf { f( x)} +\sum _{i=1 }^ {m} \; \mathbf {g}_{i}( \mathbf {x}) {u_i} , \end{aligned}$$
(1)

where \(\mathbf {x} \in \mathbf {\mathfrak {R}}^{n} \) is the state vector, \( {u_i} \in \mathfrak {R}\) for \(i=1,\ldots ,m\) are the control inputs, and \(\mathbf {u}:=({u_1} \; {u_2}\; \ldots {u_m}) ^{T}\). Without loss of generality we assume that \( \mathbf {x}^{o}={ \mathbf {0}} \) and that the system is at rest at the nominal operating point (\( \mathbf {x}^{o}, { \mathbf {u}}^o={ \mathbf {0}})\). We assume \( \mathbf {f(x)}\) and \(\mathbf {g}_{i}(\mathbf {x}) \) have continuous derivatives up to any required order over a given domain. Utilizing differential geometry methods, nonlinear control approaches have been developed to transform the nonlinear control system in (1) into a linear system through a nonlinear state transformation and nonlinear feedback [7, 8]. An important contribution to control system design was the development of the necessary and sufficient conditions for nonlinear systems to be locally and/or globally transformed into a linear system by a state transformation [9, 10]. The solution requires solving coupled nonlinear PDEs. Tall [8] proposed an approach to compute the linearizing state and feedback transformations explicitly; however, that approach does not describe how to generate a family of solutions. For systems that do not satisfy the exact feedback linearization conditions, approximate feedback linearization methods and their associated necessary and sufficient conditions for applicability were developed [11, 12]. The approach is to expand the nonlinear system in a Taylor series around a nominal point and then seek a nonlinear state transformation and nonlinear state feedback such that the resulting system is linear in the new state up to the degree of the Taylor series approximation. The solution process involves solving a set of homological equations that can be represented in linear matrix form as

$$\begin{aligned} \mathbf {L} \mathbf {a} = \mathbf { b} , \end{aligned}$$
(2)

where \(\mathbf {a}\) contains all the unknown parameters, and \(\mathbf {L}\) and \(\mathbf {b}\) are known, with \(\mathbf {L}\) generally nonsquare and not full rank. The unknown parameters, \(\mathbf {a}\), are found from

$$\begin{aligned} \mathbf {a}={ \mathbf {L}^{+}}\mathbf {b}+\mathbf {C}\; \mathbf {N}( \mathbf {L}), \end{aligned}$$
(3)

where \({ \mathbf {L}^{+}}\) denotes the pseudo-inverse of \(\mathbf {L}\), \(\mathbf {C}=[{c_i}]\) is a row of arbitrary coefficients \(c_i \in \mathfrak {R}\), and \(\mathbf {N}( \mathbf {L})\) is a basis for the null space of \(\mathbf {L}\). For a given matrix \({ \mathbf {L}}\) the solution set of the homogeneous system is a vector space, called the null space of \({ \mathbf {L}}\) [13]. In [14] it is shown that the solution of the homological equation is not unique. As the number of states and inputs increases and as the order of the approximation (\(\rho \)) increases, the size of the linear matrix system grows very quickly. For example, with the number of states \(n=4\), the number of inputs \(m=2\), and \(\rho =2\) (a 2nd order approximation), the matrix \(\mathbf {L} \in \mathfrak {R}^{68 \times 72}\). For \(\rho =3\) (a 3rd order approximation), \(\mathbf {L} \in \mathfrak {R}^{208 \times 232}\). As the size of \(\mathbf {L} \) increases, the likelihood of being able to obtain a solution using symbolic computer calculations decreases, hence the ability to successfully design nonlinear feedback controllers for parameterized nonlinear control systems is significantly hindered.
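The structure of (3) can be sketched numerically. The following minimal example, with an illustrative \(\mathbf {L}\) and \(\mathbf {b}\) of our own choosing (not taken from the paper), computes the minimum-norm solution \(\mathbf {L}^{+}\mathbf {b}\) and a null-space basis, and verifies that any choice of the free coefficients still solves \(\mathbf {L}\mathbf {a}=\mathbf {b}\) exactly:

```python
import numpy as np

# Illustrative wide, rank-deficient system L a = b (placeholders, not from the paper)
L = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])
b = np.array([3.0, 1.0])

a_min = np.linalg.pinv(L) @ b        # particular solution L^+ b (minimum norm)

# Null-space basis N(L): right singular vectors for (numerically) zero singular values
_, s, Vt = np.linalg.svd(L)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T                      # columns span the null space of L

# Any designer-chosen coefficient vector c yields another exact solution
c = np.array([0.7, -1.3])
a = a_min + N @ c
print(np.allclose(L @ a, b))         # True: the whole family solves L a = b
```

The free coefficients in `c` play the role of the null space coefficients \(c_i\) in (3): they parameterize the family of solutions among which the designer can optimize.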

In this paper, we apply the approximate feedback linearization procedure developed by Krener et al. in a symbolic recursive fashion to systems that are known to satisfy the restrictive conditions for exact linearization. Most importantly, we derive a family of solutions through the use of the null space of the solution and show how to approach the exact linear solution asymptotically. The approach presented differs from the previous work of Karahan [15] by seeking the coordinate transformation and nonlinear feedback in a symbolic recursive fashion while searching for patterns. Moreover, the approximate linearization method is applied to an exactly feedback linearizable system to obtain a family of analytic solutions asymptotically. We consider nonlinear systems known to satisfy the necessary and sufficient conditions for exact feedback linearization; these systems also satisfy the less restrictive conditions for approximate feedback linearization. In this situation, an appropriately modified set of homological equations, represented by linear matrix systems, can be solved recursively starting at \(\rho =2\), then advancing to \(\rho =3\), and so on up to the desired order of the approximation. At each step in the solution process the linear matrix system grows much more slowly. For example, with the number of states \(n=4\), the number of inputs \(m=2\), and \(\rho =2\), the matrix \(\mathbf {L} \in \mathfrak {R}^{68 \times 72}\); for \(\rho =3\), \(\mathbf {L} \in \mathfrak {R}^{140 \times 160}\). So when \(\rho =3\), at most we need to compute the pseudo-inverse of a matrix \(\mathbf {L} \in \mathfrak {R}^{140 \times 160}\) instead of, as discussed above, \(\mathbf {L} \in \mathfrak {R}^{208 \times 232}\).

The final nonlinear state transformations and nonlinear feedback are obtained by algebraically reassembling the intermediate solutions up to the desired order \(\rho \). This multi-step procedure increases the applicability of symbolic computer calculations as compared to the single-step process, and hence increases the likelihood of achieving a successful design for parameterized nonlinear systems. Moreover, at each step in the solution of the linear matrix system, the null space serves as the foundation for creating a family of solutions. The design process ultimately relies on the control system designer to discern patterns in the solutions as they emerge in the recursive application of the approximation algorithm and then appropriately incorporate the null space to obtain a design that can be described analytically.

2.2 Higher degree approximation of control systems

We seek a nonlinear state transformation and nonlinear state feedback for the nonlinear system in (1) such that the transformed system is equivalent to a linear system plus higher degree terms of \({\mathcal {O}} ^ {{\rho } +1} (\mathbf {x,u}) \), where \(\rho \) is the degree of the approximation. Consider the approximation up to the \(\rho \)-th degree. For this purpose, expanding the original nonlinear system in (1) in a Taylor series around \( (\mathbf {x}^{0} ,{\mathbf {u}}^0) \) yields

$$\begin{aligned} {\dot{\mathbf {x}}}=\mathbf {F x }+ {\mathbf { G }} \mathbf {u} + \mathbf {f}^{(2) }( \mathbf {x}) + {\mathbf {g}}^{(1)} ( \mathbf {x}) \mathbf {u} + \cdots , \end{aligned}$$
(4)

where

$$\begin{aligned} \mathbf {F}:={\frac{\partial \mathbf {f}}{\partial \mathbf {x}}} \big |_{\mathbf {x=x}^{0}} , \; {\mathbf {G}}:= \mathbf {g} ({\mathbf {x}}^0),\; \text {and} \; \mathbf {g}:=(\mathbf {g}_{1},\ldots ,\mathbf {g}_{m}). \end{aligned}$$

We assume a state transformation of the form

$$\begin{aligned} \mathbf { z}=\mathbf {T}(\mathbf {x})= \mathbf {x} - \sum _{p=2 }^{\rho } {\phi }^ {(p)} (\mathbf {x}) , \end{aligned}$$
(5)

where \(\mathbf {z} \) are the transformed coordinates and \({{\phi }}^{(p)} (\mathbf {x}) \) is a vector of p degree polynomials in \(\mathbf {x}\) as

$$\begin{aligned} {{\phi }}^{(p)} (\mathbf {x}) =\begin{bmatrix} {\phi }^{(p)}_{1} (\mathbf {x}) \\ {\phi }^{(p)}_{2} (\mathbf {x})\\ \vdots \\ {\phi }^{(p)}_{n} (\mathbf {x})\end{bmatrix}. \end{aligned}$$
(6)
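The components \({\phi }^{(p)}_{k}(\mathbf {x})\) in (6) are homogeneous polynomials of degree \(p\) with unknown coefficients. A minimal symbolic sketch of constructing such a vector (the dimension \(n\), degree \(p\), and coefficient names below are illustrative choices of ours) is:

```python
import sympy as sp

# Build phi^(p)(x): each of the n components is an unknown homogeneous
# polynomial of degree p in x1..xn (here n = 3, p = 2, names illustrative)
n, p = 3, 2
xs = sp.symbols(f'x1:{n+1}')

# All monomials of total degree exactly p: C(n+p-1, p) of them
monos = sorted(sp.itermonomials(xs, p, p), key=sp.default_sort_key)
print(len(monos))   # 6 quadratic monomials for n = 3

# One unknown coefficient per component per monomial
coeffs = [sp.symbols(f'c{i}_0:{len(monos)}') for i in range(n)]
phi = sp.Matrix([sum(c*m for c, m in zip(coeffs[i], monos)) for i in range(n)])
```

These unknown coefficients are exactly the entries of \(\mathbf {a}\) in the linear matrix form (2) once the homological equations are written out by coefficient matching.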

The goal is to find a state transformation such that the transformed system will be linear in the new coordinates as

$$\begin{aligned} {\dot{\mathbf {z}}}=\mathbf { F z }+ {\mathbf { G }}{\mathbf {v} }+ \mathbf {\mathcal {O}}^{\rho +1}(\mathbf {x},\mathbf {u}). \end{aligned}$$
(7)

The new input \(\mathbf { v}\) is given by

$$\begin{aligned} \mathbf {v}= \sum _{p=2 }^{\rho } {{\varvec{\alpha }}}^{(p)} ( \mathbf {x}) + \left( \mathbf {I }+ \sum _{p=2 }^{\rho }{{\varvec{\beta }}}^{(p-1)}( \mathbf {x}) \right) \mathbf {u}, \end{aligned}$$
(8)

where

$$\begin{aligned} {{\varvec{\alpha }}}^{(p)} (\mathbf {x})&=\begin{bmatrix} {{\alpha }}^{(p)}_{1} (\mathbf {x}) \\ {{\alpha }}^{(p)}_{2} (\mathbf {x})\\ \vdots \\ {{\alpha }}^{(p)}_{m} (\mathbf {x})\end{bmatrix} ,\\ {{\varvec{\beta }}}^{(p-1)} (\mathbf {x})&=\begin{bmatrix} {{\beta }}^{(p-1)}_{11} (\mathbf {x}) &\ldots &{{\beta }}^{(p-1)}_{1m} (\mathbf {x}) \\ \vdots &\ddots &\vdots \\ {{\beta }}^{(p-1)}_{m1} (\mathbf {x}) &\ldots &{{\beta }}^{(p-1)}_{mm} (\mathbf {x}) \end{bmatrix} , \end{aligned}$$

where \({{\varvec{\alpha }}}^{(p)} (\mathbf {x}) \) and \({{\varvec{\beta }}} ^{(p-1)} (\mathbf {x}) \) are comprised of polynomials of degree p and degree \(p-1\), respectively. As shown in [15], if we find \({{\phi }}^ {(p)}(\mathbf {x}), {\varvec{\alpha }}^{(p)} (\mathbf {x})\) and \({ \varvec{\beta }}^{(p-1)} (\mathbf {x}) \) such that

$$\begin{aligned}&[\mathbf {Fx }, {\phi } ^ {(p)} (\mathbf {x})] + \mathbf { G} {{ \varvec{\alpha }}}^{(p)} ( \mathbf {x}) = {\mathbf {f}} ^ {(p)}_{New}(\mathbf {x}) \end{aligned}$$
(9)
$$\begin{aligned}&[\mathbf {G}_{i}{u_i}, { \phi } ^ {(p)} (\mathbf {x})] + \mathbf { G}_{i } {{\beta _{i }}}^{(p-1)} (\mathbf {x}){u_i} ={\mathbf {g}}^{(p-1)} _{iNew}(\mathbf {x}) {u_i} \end{aligned}$$
(10)

are satisfied for \(p=2,\ldots ,\rho \) and \(i=1,\ldots ,m\), where \([\cdot ,\cdot ]\) represents the Lie bracket, \({\mathbf {G}}=[ {\mathbf {G}}_{1},\ldots , {\mathbf {G}}_{m}]\), and

$$\begin{aligned} {\mathbf {f}} ^ {(p)}_{New}(\mathbf {x}) = {\mathbf {f}} ^ {(p)}(\mathbf {x}) - \sum _{j=2 }^ {p-1} \quad \frac{{\partial {\phi }^ {(p-j+1)}} (\mathbf {x}) }{\partial \mathbf {x}} {\mathbf {f }}^ {(j)}(\mathbf {x}) , \end{aligned}$$
(11)

and

$$\begin{aligned} {\mathbf {g}}^{(p-1)} _{iNew}(\mathbf {x}) ={\mathbf {g}}^{(p-1)}_{i} (\mathbf {x}) - \sum _{j=2 }^ {p-1} \quad \frac{\partial {\phi }^ {(p-j+1)} (\mathbf {x}) }{\partial \mathbf {x}} {\mathbf {g }}^ {(j-1)}_i(\mathbf {x}), \end{aligned}$$
(12)

the higher-order terms in (4) will vanish yielding

$$\begin{aligned} {\dot{\mathbf {z}}}=\mathbf { F z }+ {\mathbf { G }} \mathbf {v }+ \mathbf {\mathcal {O}}^{\rho +1}(\mathbf {x},\mathbf {u}). \end{aligned}$$
(13)

For example, when \(\rho =3\), (11) and (12) are solved for \(p=2\) as

$$\begin{aligned}&{\mathbf {f}} ^ {(2)}_{New}(\mathbf {x}) = {\mathbf {f}} ^ {(2)}(\mathbf {x}) \end{aligned}$$
(14)
$$\begin{aligned}&{\mathbf {g}}^{(1)} _{iNew}(\mathbf {x}) ={\mathbf {g}}^{(1)}_{i} (\mathbf {x}) \end{aligned}$$
(15)

and for \(p=3\) as

$$\begin{aligned}&{\mathbf {f}} ^ {(3)}_{New}(\mathbf {x}) = {\mathbf {f}} ^ {(3)}(\mathbf {x}) - \frac{{\partial {\phi }^ {(2)}} (\mathbf {x}) }{\partial \mathbf {x}} {\mathbf {f }}^ {(2)}(\mathbf {x}) \nonumber \\&{\mathbf {g}}^{(2)} _{iNew}(\mathbf {x}) ={\mathbf {g}}^{(2)}_{i} (\mathbf {x}) - \frac{\partial {\phi }^ {(2)} (\mathbf {x}) }{\partial \mathbf {x}} {\mathbf {g }}^ {(1)}_i(\mathbf {x}). \end{aligned}$$
(16)
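The correction terms in (16) are straightforward Jacobian products. As a sketch, the following code evaluates \({\mathbf {f}}^{(3)}_{New}\) and \({\mathbf {g}}^{(2)}_{New}\) for hypothetical low-order data; every specific polynomial below (the \(\mathbf {f}^{(p)}\), \(\mathbf {g}^{(p-1)}\), and \(\phi ^{(2)}\)) is an illustrative assumption, not taken from the paper:

```python
import sympy as sp

# Step p = 3 of the recursion (16): correct the cubic terms using the
# degree-2 transformation phi^(2) found at step p = 2 (all data assumed)
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

f2   = sp.Matrix([x1**2, x1*x2])     # f^(2)(x), assumed
f3   = sp.Matrix([x1**3, x2**3])     # f^(3)(x), assumed
g1   = sp.Matrix([0, x1])            # g^(1)(x), assumed
g2   = sp.Matrix([x1*x2, 0])         # g^(2)(x), assumed
phi2 = sp.Matrix([x1*x2, x2**2])     # phi^(2)(x), assumed found at p = 2

J = phi2.jacobian(x)
f3_new = sp.expand(f3 - J * f2)      # f_New^(3) per (16)
g2_new = sp.expand(g2 - J * g1)      # g_New^(2) per (16)
print(f3_new.T)                      # Matrix([[x1**3 - 2*x1**2*x2, -2*x1*x2**2 + x2**3]])
```

At each step the previously found \(\phi ^{(p-1)}\) feeds back into the right-hand side, which is why the recursion must be carried out in order of increasing \(p\).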

3 Exact feedback linearization

3.1 Exact feedback linearization method

Consider the nonlinear system in (1), and assume the nonlinear system satisfies the necessary and sufficient conditions (controllability and involutivity) for exact feedback linearization. Define

$$\begin{aligned} \varvec{\varOmega }_{0}(\mathbf {x})&= \text {span} \{\mathbf {g}_{1} , \ldots , \mathbf {g}_{m}\}\nonumber \\ \varvec{\varOmega }_{1}(\mathbf {x})&= \text {span} \{\mathbf {g}_{1} , \ldots ,\mathbf {g}_{m} , ad_\mathbf {f}^{1} {\mathbf {g}}_{1} , \ldots ,ad_\mathbf {f}^{1} \mathbf {g}_m \} \nonumber \\ \vdots \nonumber \\ \varvec{\varOmega }_{j}(\mathbf {x})&= \text {span }\{{ ad_\mathbf {f}}^{k} \mathbf {g}_{i}, \quad 0 \le k \le j, 1 \le i \le m \} \end{aligned}$$
(17)

for \( j=0,1,\ldots ,n-1\) where \( {ad}^{k}_\mathbf {f } \mathbf {g }= [ \mathbf {f}, ad^{k-1}_\mathbf {f } \mathbf {g}]\). The question of the conditions under which the system in (1) can be represented in linear form using nonlinear state transformations and nonlinear feedback has been thoroughly addressed in the literature. It turns out that the system in (1) is exactly feedback linearizable around an equilibrium point if and only if the distribution \(\varvec{\varOmega }_{n-1}\) has dimension n and for each \( 0 \le j \le n-2\), the distribution \(\varvec{\varOmega }_{j} \) is involutive (see [10] for an overview of nonlinear control).

Definition 1

The distribution \(\varvec{\varOmega }_{j} \) is involutive if there exist functions \( c_{ik}(\mathbf {x}) \in \mathfrak {R}\) such that

$$\begin{aligned}{}[ ad_\mathbf {f}^{k_1} \mathbf {g}_{i_1} , ad_\mathbf {f}^{k_2} \mathbf {g}_{i_2} ]=\sum _{i=1 }^ {m} \sum _{k=0 }^ {n-2} \;{c_{ik}} \; ad_\mathbf {f}^{k} \mathbf {g}_{i} \end{aligned}$$
(18)

for any \(0 \le {k_1},{k_2} \le n-2 \; \text {and} \; 1 \le {i_1},{i_2} \le m\).

If the conditions for exact feedback linearization are satisfied, then there exist a nonlinear state transformation and a nonlinear feedback that render the system linear in the new coordinates.

We begin by assuming a state transformation of the form

$$\begin{aligned} \mathbf {z}=\mathbf {T}( \mathbf {x}), \end{aligned}$$
(19)

where \(\mathbf {z} \) are the transformed states. Our goal is to find the state transformations and feedback parameters such that the transformed system will be linear in the new state as

$$\begin{aligned} {\dot{\mathbf {z}}}= \mathbf {F z} + {\mathbf { G}} {\mathbf {v}}. \end{aligned}$$
(20)

The nonlinear state feedback input \(\mathbf { u}\) is given by

$$\begin{aligned} \mathbf {u}= \hat{{\varvec{\alpha }}}( \mathbf {x}) + \hat{\varvec{\beta } }( \mathbf {x}) \; \mathbf {v}, \end{aligned}$$
(21)

where \(\mathbf {v} \in \mathfrak {R}^{m}\). Taking the time derivative of \(\mathbf {T}( \mathbf {x})\) in (19) and using (1) and (21) yields

$$\begin{aligned} {\dot{\mathbf {z}}}={\frac{\partial \mathbf {T}}{\partial \mathbf {x}}} \; \mathbf {f}(\mathbf {x})+{\frac{\partial \mathbf {T}}{\partial \mathbf {x}}} {\mathbf {g}}(\mathbf {x}) \big ({\hat{{\varvec{\alpha }}}}(\mathbf {x}) +\hat{\varvec{\beta }}(\mathbf {x}) \; \mathbf {v}\big ). \end{aligned}$$
(22)

Comparing (22) with (20) and (19) we find

$$\begin{aligned}&{\frac{\partial \mathbf {T}}{\partial \mathbf {x}}} \; \mathbf {f}(\mathbf {x})+ {\frac{\partial \mathbf {T}}{\partial \mathbf {x}}} {\mathbf {g}}(\mathbf {x}) {\hat{\varvec{\alpha }}}(\mathbf {x}) = \mathbf {F} \mathbf {T} \end{aligned}$$
(23)
$$\begin{aligned}&{\frac{\partial \mathbf {T}}{\partial \mathbf {x}}} {\mathbf {g}}(\mathbf {x}) {\hat{\varvec{\beta }}} (\mathbf {x}) = {\mathbf {G}}. \end{aligned}$$
(24)

Equations (23)–(24) are a set of coupled nonlinear partial differential equations (PDEs). In general, solving for \(\mathbf {T}(\mathbf {x}) , {\hat{\varvec{\alpha }}}(\mathbf {x}),\) and \( {\hat{\varvec{\beta }}} (\mathbf {x}) \) is challenging when attempting to solve the PDEs directly. An example illustrates this challenge.

Consider the nonlinear system

$$\begin{aligned} {\dot{\mathbf {x}}} =\mathbf {f}(\mathbf {x}) +\mathbf {g}( \mathbf {x}) {u}, \end{aligned}$$
(25)

where \(\mathbf {x}=\begin{bmatrix}{x_1} , {x_2}, {x_3} \end{bmatrix}^{T}\), \( {u} \in \mathfrak {R}\) is the single control input and

$$\begin{aligned}&\mathbf {f}(\mathbf {x})= \begin{bmatrix} x_{2}+x_1^{2}-{x_1}{x_3}\\ x_{3}-{x_1}{x_2}+x_2^{2}+x_3^{2}\\ -3{x_1}+2{x_2}-{x_3}+{x_2}{x_3} \end{bmatrix} \mathrm{and} \nonumber \\&\mathbf {g}(\mathbf {x})=\begin{bmatrix} {x_1}\\ 0\\ 1+{x_2} \end{bmatrix}. \end{aligned}$$

Note that \(\mathbf {x}^{0}=\mathbf {0}\), and

$$\begin{aligned}&\mathbf {F}=\begin{bmatrix} 0 &{}1 &{}0\\ 0 &{}0 &{}1 \\ -3 &{} 2&{}-1 \end{bmatrix} , \mathbf {G}=\begin{bmatrix} 0\\ 0 \\ 1 \end{bmatrix}. \end{aligned}$$
(26)
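Both the linearization (26) and the span condition from (17) can be checked symbolically for this example. The following sketch computes \(\mathbf {F}\) as the Jacobian of \(\mathbf {f}\) at the origin, \(\mathbf {G}=\mathbf {g}(\mathbf {0})\), and the vectors \(\{\mathbf {g}, ad_{\mathbf {f}}\mathbf {g}, ad_{\mathbf {f}}^{2}\mathbf {g}\}\) at the origin:

```python
import sympy as sp

# Example system (25): verify F, G of (26) and the dimension of Omega_2 at 0
x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([x2 + x1**2 - x1*x3,
               x3 - x1*x2 + x2**2 + x3**2,
               -3*x1 + 2*x2 - x3 + x2*x3])
g = sp.Matrix([x1, 0, 1 + x2])

zero = {x1: 0, x2: 0, x3: 0}
F = f.jacobian(x).subs(zero)
G = g.subs(zero)
print(F)  # Matrix([[0, 1, 0], [0, 0, 1], [-3, 2, -1]]), matching (26)

def bracket(a, b):
    # Lie bracket [a, b] = (db/dx) a - (da/dx) b
    return b.jacobian(x) * a - a.jacobian(x) * b

adg1 = bracket(f, g)            # ad_f g
adg2 = bracket(f, adg1)         # ad_f^2 g
Omega2 = sp.Matrix.hstack(g, adg1, adg2).subs(zero)
print(Omega2.rank())            # 3: Omega_{n-1} has full dimension at the origin
```

The full-rank result confirms the controllability-type condition; involutivity of the lower distributions would be checked the same way, by testing that the brackets in (18) stay in the span.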

Substituting \(\mathbf {f}(\mathbf {x})\) and \(\mathbf {g}(\mathbf {x})\) from Eq. (25) into (23)–(24) yields

$$\begin{aligned}&\frac{\partial {\mathbf {T}}}{\partial {\mathbf {x}}} \begin{bmatrix} {x_1}\\ 0 \\ 1+{x_2} \end{bmatrix} {\hat{\beta }}(\mathbf {x}) = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} , \nonumber \\&\frac{\partial {\mathbf {T}}}{\partial {\mathbf {x}}} \mathbf {f(x)}+ \frac{\partial {\mathbf {T}}}{\partial {\mathbf {x}}} \mathbf {g(x)} {\hat{\alpha }}= \left[ \begin{array}{c} {T_2}\\ {T_3}\\ -3{T_1}+2{T_2}-{T_3}\\ \end{array}\right] . \end{aligned}$$
(27)

Expanding Eq. (27) we have the six PDEs

$$\begin{aligned}&{\frac{\partial T_1}{\partial x_1}} {x_1} + {\frac{\partial T_1}{\partial x_3}} {(1+{x_2})}=0 \end{aligned}$$
(28)
$$\begin{aligned}&{\frac{\partial T_2}{\partial x_1}} {x_1} + {\frac{\partial T_2}{\partial x_3}} {(1+{x_2})}=0 \end{aligned}$$
(29)
$$\begin{aligned}&\left( {\frac{\partial T_3}{\partial x_1}} {x_1} + {\frac{\partial T_3}{\partial x_3}} {(1+{x_2}) }\right) {\hat{\beta }}(\mathbf {x})=1 \end{aligned}$$
(30)
$$\begin{aligned}&{T}_{2} - \frac{\partial {T}_{1}}{\partial {x_1}}(x_2+{x_1^2}-{x_1}{x_3}) - \frac{\partial {T}_{1}}{\partial {x_2}} ({x_3} -{x_1}{x_2} \nonumber \\&\qquad +{x_2^2}+{x_3^2})-\frac{\partial {T}_{1}}{\partial {x_3}} (-3{x_1}+2{x_2}-{x_3} +{x_2}{x_3})\nonumber \\&\qquad -\left( \frac{\partial {T}_{1}}{\partial {x_1}} {x_1} +\frac{\partial {T}_{1}}{\partial {x_3}} ({x_2}+1) \right) {\hat{\alpha }} (\mathbf {x}) =0 \end{aligned}$$
(31)
$$\begin{aligned}&{T}_{3} - \frac{\partial {T}_{2}}{\partial {x_1}}(x_2+{x_1^2}-{x_1}{x_3})- \frac{\partial {T}_{2}}{\partial {x_2}} ({x_3}-{x_1}{x_2}\nonumber \\&\qquad +{x_2^2}+{x_3^2})-\frac{\partial {T}_{2}}{\partial {x_3}} (-3{x_1}+2{x_2}-{x_3} +{x_2}{x_3})\nonumber \\&\qquad -\left( \frac{\partial {T}_{2}}{\partial {x_1}} {x_1} +\frac{\partial {T}_{2}}{\partial {x_3}} ({x_2}+1) \right) {\hat{\alpha }} (\mathbf {x})=0 \end{aligned}$$
(32)
$$\begin{aligned}&-3{T_1}+2{T_2} -{T_3}- \frac{\partial {T}_{3}}{\partial {x_1}} (x_2+{x_1^2}-{x_1}{x_3}) \nonumber \\&\qquad - \frac{\partial {T}_{3}}{\partial {x_2}} ({x_3}-{x_1}{x_2}+{x_2^2} +{x_3^2})\nonumber \\&\qquad -\frac{\partial {T}_{3}}{\partial {x_3}} (-3{x_1}+2{x_2}-{x_3} +{x_2}{x_3}) \nonumber \\&\qquad -\left( \frac{\partial {T}_{3}}{\partial {x_1}} {x_1} +\frac{\partial {T}_{3}}{\partial {x_3}} ({x_2}+1) \right) {\hat{\alpha }} (\mathbf {x})= 0. \end{aligned}$$
(33)

The complexity of the coupled nonlinear PDEs in (28)–(33) demonstrates that it is often challenging to compute \(\mathbf {T}(\mathbf {x})\), \( {\hat{\alpha }} (\mathbf {x})\), and \({\hat{\beta }}(\mathbf {x})\) even when the underlying nonlinearities in (25) are relatively simple.

An alternative to exact feedback linearization by direct solution of the coupled nonlinear PDEs is to recursively apply the approximate feedback linearization procedure up to order \(\rho \) to a system known to be exactly feedback linearizable and consider what happens as \(\rho \rightarrow \infty \). The approximate feedback linearization problem posed by Krener [9] and Karahan [15] is our method of choice for finding a transformation and state feedback such that the transformed nonlinear system is linear up to degree \(\rho \).

Design Approach: Consider a nonlinear system in (1) which satisfies the necessary and sufficient conditions for feedback linearization. The design approach examined here is based on a nonlinear state transformation and nonlinear feedback found using a recursive application of the approximate linearization method. At each step in the recursion, all terms up to and including order \(\rho \) are accounted for and eliminated. The structure of the recursion is such that at each step the associated linear matrix system is generally small enough to permit symbolic computations, enabling design for parameterized nonlinear systems. Through the use of the null space of the solution we can create a family of exact solutions. The control system designer interacts with the recursive process, seeking to discern emerging patterns in the asymptotic approximations, considering especially the null space and the degrees of freedom provided by selectable null space coefficients. We then consider the solution as \(\rho \rightarrow \infty \), with some confidence that the emerging patterns can be discerned before the computations become unwieldy and hinder our ability to obtain an analytic solution.

To see this, first assume the nonlinear system satisfies the necessary and sufficient conditions for exact feedback linearization. Therefore, we know that the state transformation and feedback parameters exist. The input in the exact feedback linearization method from (21) can be written as

$$\begin{aligned} \mathbf {v}={\hat{\varvec{\beta }} }^{-1} (\mathbf {x}) (\mathbf {u}-{\hat{\varvec{\alpha }}}(\mathbf {x})) \end{aligned}$$
(34)

and comparing (8) and (34) leads to the relationships

$$\begin{aligned} -{\hat{\varvec{\beta }}} ^{-1}(\mathbf {x}) \;{\hat{\varvec{\alpha }}}(\mathbf {x}) = \sum _{p=2}^{\rho } {\varvec{\alpha }}^{(p)}(\mathbf {x}) , \end{aligned}$$
(35)

and

$$\begin{aligned} {\hat{\varvec{\beta }}} ^{-1}(\mathbf {x}) = \mathbf {I}+\sum _{p=2}^{\rho } {\varvec{\beta }}^{(p-1)} (\mathbf {x}) , \end{aligned}$$
(36)

where we let \(\rho \rightarrow \infty \). The state transformation and state feedback are given in (19) and (21). Suppose that \(\mathbf { f}(\mathbf {x})\) and \(\mathbf {g}( \mathbf {x}) \) are smooth functions and can be expanded in a Taylor series up to order \(\rho \) ([16]) as

$$\begin{aligned}&\mathbf { f}(\mathbf {x})=\mathbf {F x} +\sum _{p=2 }^ {\rho } \; {\mathbf {f}}^{(p)} ( \mathbf {x}) \end{aligned}$$
(37)
$$\begin{aligned}&\mathbf { g( x)} =\mathbf {G} +\sum _{p=2 }^ {\rho } {\mathbf {g}}^{(p-1)} ( \mathbf {x}). \end{aligned}$$
(38)
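The decomposition (37)–(38) amounts to collecting monomials by total degree. A short sketch, demonstrated on the drift of the example system (25), whose expansion happens to terminate at degree 2:

```python
import sympy as sp

# Split f(x) into the homogeneous pieces of (37): f = Fx + f^(2) + ...
x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = (x1, x2, x3)
f = sp.Matrix([x2 + x1**2 - x1*x3,
               x3 - x1*x2 + x2**2 + x3**2,
               -3*x1 + 2*x2 - x3 + x2*x3])

def homogeneous_part(expr, deg):
    # Keep only the total-degree-`deg` monomials of a polynomial expression
    poly = sp.Poly(sp.expand(expr), *xs)
    return sp.Add(*[c * sp.prod(v**e for v, e in zip(xs, mono))
                    for mono, c in poly.terms() if sum(mono) == deg])

f1 = f.applyfunc(lambda e: homogeneous_part(e, 1))   # the linear part F x
f2 = f.applyfunc(lambda e: homogeneous_part(e, 2))   # f^(2)(x)
print(sp.expand(f - f1 - f2) == sp.zeros(3, 1))      # True: expansion is exact here
```

For a non-polynomial \(\mathbf {f}\) one would first truncate a Taylor series at degree \(\rho \) and then split it the same way.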

Since we assume that the conditions for exact feedback linearization are satisfied, we know that \(\mathbf {T}( \mathbf {x})\) exists. Taking the time derivative of \(\mathbf {T}( \mathbf {x})\) in (19) yields

$$\begin{aligned} {\dot{\mathbf {z}}}={\frac{\partial \mathbf {T}}{\partial \mathbf {x}}} \; \dot{\mathbf {x}}. \end{aligned}$$
(39)

Substituting (34) for \(\mathbf {v}\) in (20) yields

$$\begin{aligned} {\dot{\mathbf {z}}}= \mathbf {F z} + \mathbf {G} \left( {\varvec{\alpha }}(\mathbf {x}) + \big (\mathbf {I}+{\varvec{\beta }}(\mathbf {x})\big ) \mathbf {u}\right) , \end{aligned}$$
(40)

where we note that \({\varvec{\alpha }}(\mathbf {x}):= -{\hat{\varvec{\beta }}}^{-1} (\mathbf {x}) {\hat{\varvec{\alpha }}}(\mathbf {x})\) and \(\mathbf {I}+\varvec{\beta }(\mathbf {x}):= {\hat{\varvec{\beta }}}^{-1} (\mathbf {x})\), consistent with (35)–(36). Comparing (40) with (23)–(24) and using (37)–(38) yields the PDEs

$$\begin{aligned}&{\frac{\partial \mathbf {T}}{\partial \mathbf {x}}} \left( \mathbf {Fx}+\sum _{p=2 }^ {\rho } \; {\mathbf {f}}^{(p)} ( \mathbf {x})\right) - \mathbf {G} {\varvec{\alpha }}(\mathbf {x}) = \mathbf {F T} (\mathbf {x}) \end{aligned}$$
(41)
$$\begin{aligned}&{\frac{\partial \mathbf {T}}{\partial \mathbf {x}}} \left( \mathbf {G} +\sum _{p=2 }^ {\rho } \; {\mathbf {g}}^{(p-1)} ( \mathbf {x}) \right) =\mathbf {G} (\mathbf {I}+\varvec{\beta }(\mathbf {x}) ) . \end{aligned}$$
(42)

From the theory of exact feedback linearization, \(\mathbf {T} (\mathbf {x})\) must be a smooth differentiable function. It is known that an analytic function can be represented by its Taylor series (see [2] for a proof). Expanding \(\mathbf {T} (\mathbf {x})\) in the Taylor series, where \({\mathbf {T}}^{(p)} ( \mathbf {x})\) are the higher degree terms in the series, yields

$$\begin{aligned} \mathbf {T}(\mathbf {x})=\mathbf {x} - \sum _{p=2 }^ {\infty } { \mathbf {T}}^ {(p)} (\mathbf {x}), \end{aligned}$$
(43)

where \(\mathbf {T} (0)=\mathbf {0}\). Note that in order to satisfy (23)–(24), \(\mathbf {T} (\mathbf {x})\) satisfies \({{\partial \mathbf {T}}/{\partial \mathbf {x}}} \big |_{\mathbf {x}=\mathbf {x}^{0}}=\mathbf {I}\), and has the form in (43). Taking the partial derivative of \(\mathbf {T} (\mathbf {x})\) in (43) with respect to \(\mathbf {x}\) we have

$$\begin{aligned} {\frac{\partial \mathbf {T}}{\partial \mathbf {x}}} =\mathbf {I}- \sum _{p=2 }^ {\infty } \frac{\partial {{ \mathbf {T}}^ {(p)}} (\mathbf {x}) }{\partial \mathbf {x}}. \end{aligned}$$
(44)

Define

$$\begin{aligned} \varvec{\alpha } ( \mathbf {x}) := \sum _{p=2 }^ {\rho } {\varvec{\alpha }^{(p)}} ( \mathbf {x}) \quad \text {and} \quad \varvec{\beta } ( \mathbf {x}) := \sum _{p=2 }^ {\rho } {\varvec{\beta }^{(p-1)}} ( \mathbf {x}) . \end{aligned}$$
(45)

Substituting (44) into (41) and utilizing the Lie bracket yields

$$\begin{aligned}{}[ \mathbf {F x} , \sum _{p=2 }^ {\rho } { \mathbf {T}}^ {(p)} (\mathbf {x}) ]+ \mathbf {G}{\varvec{\alpha }}(\mathbf {x})&= \sum _{p=2 }^ {\rho } {\mathbf {f}}^{(p)} ( \mathbf {x}) \nonumber \\&- \sum _{p=2 }^ {\rho } \frac{\partial {{ \mathbf {T}}^ {(p)}} (\mathbf {x}) }{\partial \mathbf {x}} \sum _{p=2 }^ {\rho } {\mathbf {f}}^{(p)} ( \mathbf {x}), \end{aligned}$$
(46)

where we note that

$$\begin{aligned} \sum _{p=\rho +1 }^ {\infty } \frac{\partial {{ \mathbf {T}}^ {(p)}} (\mathbf {x}) }{\partial \mathbf {x}}&\left( \mathbf {F x}+\sum _{p=2 }^ {\rho } {\mathbf {f}}^{(p)} ( \mathbf {x}) \right) \nonumber \\&+\mathbf {F}\sum _{p=\rho +1 }^ {\infty } { \mathbf {T}}^ {(p)} (\mathbf {x}) =\mathbf {\mathcal {O}}^{(\rho +1)}(\mathbf {x}), \end{aligned}$$
(47)

and consider

$$\begin{aligned} \sum _{p=2 }^ {\rho }&\frac{ \partial {{ \mathbf {T}}^ {(p)}} (\mathbf {x}) }{\partial \mathbf {x}} \sum _{p=2 }^ {\rho } {\mathbf {f}}^{(p)} ( \mathbf {x}) = \nonumber \\&\sum _{j=3}^ {\rho } \sum _{p=2 }^ {j-1} \frac{\partial {{\mathbf {T}}^ {(j-p+1)}} (\mathbf {x})}{\partial \mathbf {x}} \; {\mathbf {f}}^{(p)} ( \mathbf {x}) +\mathbf {\mathcal {O}}^{(\rho +1)} (\mathbf {x}). \end{aligned}$$
(48)

Therefore, we can re-write (46) as

$$\begin{aligned}{}[ \mathbf {F x} , \sum _{p=2 }^ {\rho } { \mathbf {T}}^ {(p)} (\mathbf {x}) ]&+ \mathbf {G}\sum _{p=2 }^ {\rho } {\varvec{\alpha }^{(p)}} ( \mathbf {x}) = \sum _{p=2 }^ {\rho } {\mathbf {f}}^{(p)} ( \mathbf {x}) \nonumber \\&- \sum _{j=3 }^ {\rho } \sum _{p=2 }^ {j-1} \frac{\partial {{ \mathbf {T}}^ {(j-p+1)}} (\mathbf {x}) }{\partial \mathbf {x}} \; {\mathbf {f}}^{(p)} ( \mathbf {x}). \end{aligned}$$
(49)

Expanding (49) and comparing like terms we find

$$\begin{aligned}&[ \mathbf {Fx} , { \mathbf {T}}^ {(2)} (\mathbf {x}) ] + \mathbf {G} {\varvec{\alpha }^{(2)}} (\mathbf {x}) = {\mathbf {f}} ^{(2)} (\mathbf {x}) \end{aligned}$$
(50)
$$\begin{aligned}&[ \mathbf {F x} , { \mathbf {T}}^ {(3)} (\mathbf {x}) ] + \mathbf {G} {\varvec{\alpha }^{(3)}} (\mathbf {x}) = {\mathbf {f}} ^{(3)} (\mathbf {x}) -\frac{\partial {{ \mathbf {T}}^ {(2)}} (\mathbf {x}) }{\partial \mathbf {x}} {\mathbf {f}} ^{(2)} (\mathbf {x}) \end{aligned}$$
(51)
$$\begin{aligned}&\qquad \qquad \vdots \nonumber \\&[ \mathbf {Fx} , { \mathbf {T}}^ {(\rho )} (\mathbf {x}) ] + \mathbf {G} {\varvec{\alpha }^{(\rho )}} (\mathbf {x}) = {\mathbf {f}} ^{(\rho )} (\mathbf {x}) - \nonumber \\&\quad \quad \quad \frac{\partial {{ \mathbf {T}}^ {(\rho -1)}} }{\partial \mathbf {x}} {\mathbf {f}} ^{(2)} (\mathbf {x}) - \cdots -\frac{\partial {{ \mathbf {T}}^ {(2)}} }{\partial \mathbf {x}} {\mathbf {f}} ^{(\rho -1)} (\mathbf {x}). \end{aligned}$$
(52)

Comparing (50) with the homological equations in (9), we conclude that \({\varvec{\phi }}^ {(\rho )}(\mathbf {x}) \) is equivalent to \({ \mathbf {T}}^ {(\rho )} (\mathbf {x})\). Since \({ \mathbf {T}}^ {(\rho )} (\mathbf {x})\) exists, \({{\phi }}^ {(\rho )}(\mathbf {x}) \) also exists. From this point forward, we replace \({ \mathbf {T}}^ {(\rho )} (\mathbf {x})\) with \({{\phi }}^ {(\rho )}(\mathbf {x}) \) and note that our solution represents a Taylor series equivalent of an analytic solution up to order \(\rho \). As will be shown, with the appropriate use of the null space we can obtain a family of exact solutions as \(\rho \rightarrow \infty \).

Similarly, we can rewrite (42) as

$$\begin{aligned}&[ \mathbf {G} , {{\phi }}^ {(2)} (\mathbf {x}) ] + \mathbf {G} {\beta ^{(1)}} (\mathbf {x}) = {\mathbf {g}} ^{(1)} (\mathbf {x}) \end{aligned}$$
(53)
$$\begin{aligned}&[ \mathbf {G} , {{\phi }}^ {(3)} (\mathbf {x}) ] + \mathbf {G} {\beta ^{(2)}} (\mathbf {x}) = {\mathbf {g}} ^{(2)} (\mathbf {x}) -\frac{\partial {{ {\phi }}^ {(2)}} (\mathbf {x}) }{\partial \mathbf {x}} {\mathbf {g}} ^{(1)} (\mathbf {x}) \end{aligned}$$
(54)
$$\begin{aligned}&\qquad \qquad \vdots \nonumber \\&[ \mathbf {G} , {{\phi }}^ {(\rho )} (\mathbf {x}) ] + \mathbf {G} {\beta ^{(\rho -1)}} (\mathbf {x}) = {\mathbf {g}} ^{(\rho -1)} (\mathbf {x}) -\nonumber \\&\quad \quad \quad \frac{\partial {{ {\phi }}^ {(\rho -1)}} }{\partial \mathbf {x}} {\mathbf {g}} ^{(1)} (\mathbf {x}) -\cdots -\frac{\partial {{ {\phi }}^ {(2)}} }{\partial \mathbf {x}} {\mathbf {g}} ^{(\rho -2)} (\mathbf {x}) . \end{aligned}$$
(55)

As described in Krener, et al. [12], at each step in the recursion we solve for \( {{\phi }}^ {(p)} (\mathbf {x})\), \({\varvec{\alpha }^{(p)}}(\mathbf {x}) \), and \({\varvec{\beta }^{(p-1)}}(\mathbf {x}) \) for \(p=2, 3, \ldots ,\rho \) in (50)–(55). For example, for \(p=2\) we have

$$\begin{aligned}&[ \mathbf {Fx} , {{\phi }}^ {(2)} (\mathbf {x}) ] + \mathbf {G} {\varvec{\alpha }^{(2)}} (\mathbf {x}) = {\mathbf {f}} ^{(2)} (\mathbf {x}) \end{aligned}$$
(56)
$$\begin{aligned}&[ \mathbf {G} , {{\phi }}^ {(2)} (\mathbf {x}) ] + \mathbf {G} {\varvec{\beta }^{(1)}} (\mathbf {x}) = {\mathbf {g}} ^{(1)} (\mathbf {x}) , \end{aligned}$$
(57)

and for \(p=3\), we have

$$\begin{aligned}&[ \mathbf {Fx} , {{\phi }}^ {(3)} (\mathbf {x}) ] + \mathbf {G} {\varvec{\alpha }^{(3)}} (\mathbf {x}) = {\mathbf {f}} ^{(3)} (\mathbf {x}) -\frac{\partial {{{\phi }}^ {(2)}} (\mathbf {x}) }{\partial \mathbf {x}} {\mathbf {f}} ^{(2)} (\mathbf {x}) \end{aligned}$$
(58)
$$\begin{aligned}&[ \mathbf {G} , {{\phi }}^ {(3)} (\mathbf {x}) ] + \mathbf {G} {\varvec{\beta }^{(2)}} (\mathbf {x}) = {\mathbf {g}} ^{(2)} (\mathbf {x}) -\frac{\partial {{{\phi }}^ {(2)}} (\mathbf {x}) }{\partial \mathbf {x}} {\mathbf {g}} ^{(1)} (\mathbf {x}), \end{aligned}$$
(59)

and so on. The solution of the homological equation generally has an associated null space. We consider now the null space in the solution procedure as a pathway to create a family of solutions.
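Each recursion step is thus a finite linear-algebra problem in monomial coefficients. The following sketch (hypothetical Python/SymPy code, not the paper's implementation; the coefficient names and the monomial parametrization are illustrative assumptions) poses the degree-2 homological equations (56)–(57) for the planar pair \(\mathbf {F}\), \(\mathbf {G}\) and drift used later in Example 1, matches coefficients, and solves the resulting underdetermined linear system:

```python
import sympy as sp

# Hypothetical sketch: the degree-2 step [Fx, phi] + G*alpha = f2 and
# [G, phi] + G*beta = g1 = 0, posed as a linear system in monomial coefficients.
# F, G, f2 are those of Example 1 below (drift [exp(x2)-1, a*x1**2]).
x1, x2, a = sp.symbols('x1 x2 a')
x = sp.Matrix([x1, x2])
F = sp.Matrix([[0, 1], [0, 0]])
G = sp.Matrix([0, 1])
f2 = sp.Matrix([x2**2 / 2, a * x1**2])        # degree-2 drift terms

mons2 = [x1**2, x1 * x2, x2**2]               # degree-2 monomial basis
c = sp.symbols('c0:9')                        # unknowns: phi (6) and alpha (3)
b = sp.symbols('b0:2')                        # unknowns: beta (degree 1)
phi = sp.Matrix([sum(ci * m for ci, m in zip(c[0:3], mons2)),
                 sum(ci * m for ci, m in zip(c[3:6], mons2))])
alpha = sum(ci * m for ci, m in zip(c[6:9], mons2))
beta = b[0] * x1 + b[1] * x2

# Lie brackets: [Fx, phi] = (dphi/dx)(Fx) - F*phi, and [G, phi] = (dphi/dx)G
br_F = phi.jacobian(x) * (F * x) - F * phi
br_G = phi.jacobian(x) * G

# Stack both homological equations and match every monomial coefficient
res = sp.Matrix([br_F + G * alpha - f2, br_G + G * beta])
eqs = []
for comp in res:
    eqs += sp.Poly(sp.expand(comp), x1, x2).coeffs()
sol = sp.solve(eqs, list(c) + list(b), dict=True)[0]

free = {s: 0 for s in list(c) + list(b)}      # zero the leftover free direction
print(phi.subs(sol).subs(free).T)             # particular phi^(2)
print(alpha.subs(sol).subs(free), '|', beta.subs(sol).subs(free))
```

Note that the stacked system has 10 scalar equations in 11 unknowns, matching \(\mathbf {L} \in \mathfrak {R}^{10\times 11}\) in Example 1; setting the single remaining free coefficient to zero recovers the particular solution, while the free direction spans the one-dimensional null space that generates the family of solutions.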

3.2 Null space

Consider the state transformation

$$\begin{aligned} \mathbf {z} =\mathbf {x} - \sum _{p=2 }^ {\rho } {{\phi }}^ {(p)} (\mathbf {x})-\sum _{p=2 }^ {\rho } c_{p-1} {\bar{{\phi }}}^ {(p)} (\mathbf {x}), \end{aligned}$$
(60)

where \({\bar{{\phi }}}^ {(p)} (\mathbf {x})\) represents the terms associated with the null space (assuming a nonempty null space) and where \({c_{p-1}} \in \mathbf {\mathfrak {R}}\) for \(2 \le p \le \rho \) are carefully selected constants (more on this later). Also, suppose that

$$\begin{aligned}&\varvec{\alpha }(\mathbf {x})=\sum _{p=2 }^ {\rho } {\varvec{\alpha }^{(p)}} ( \mathbf {x}) + \sum _{p=2 }^ {\rho } c_{p-1} {{\bar{\varvec{\alpha }}}^{(p)}} ( \mathbf {x}) \end{aligned}$$
(61)
$$\begin{aligned}&\varvec{\beta }(\mathbf {x})=\sum _{p=2 }^ {\rho } {\varvec{\beta }^{(p-1)}} ( \mathbf {x}) +\sum _{p=2 }^ {\rho } c_{p-1} {\bar{\varvec{\beta }}^{(p-1)}}(\mathbf {x}), \end{aligned}$$
(62)

where \({{\bar{\varvec{\alpha }}}^{(p)}} ( \mathbf {x}) \) and \({\bar{\varvec{\beta }}^{(p-1)}} (\mathbf {x}) \) are associated with the null space and \(c_{p-1} \) are the same as in (60). The quantities \({\bar{{\phi }}}^ {(p)} (\mathbf {x})\) , \({{\bar{\varvec{\alpha }}}^{(p)}} ( \mathbf {x}) \) and \({\bar{\varvec{\beta }}^{(p-1)}}(\mathbf {x}) \) are the solutions of

$$\begin{aligned}&[\mathbf {Fx} , \sum _{p=2 }^ {\rho }{\bar{{\phi }}}^ {(p)} (\mathbf {x}) ] + \mathbf {G} \sum _{p=2 }^ {\rho } {{\bar{\varvec{\alpha }}}^{(p)}} ( \mathbf {x}) = \mathbf {0} \end{aligned}$$
(63)
$$\begin{aligned}&[\mathbf {G} , \sum _{p=2 }^ {\rho }{\bar{{\phi }}}^ {(p)} (\mathbf {x}) ] + \mathbf {G} \sum _{p=2 }^ {\rho } {{\bar{\varvec{\beta }}}^{(p-1)}} ( \mathbf {x}) = \mathbf {0}. \end{aligned}$$
(64)

Expanding (63) and (64) for \(p=2\) yields

$$\begin{aligned}&[\mathbf {Fx} , {\bar{{\phi }}}^ {(2)} (\mathbf {x}) ] + \mathbf {G} {{\bar{\varvec{\alpha }}}^{(2)}} ( \mathbf {x}) = \mathbf {0} \end{aligned}$$
(65)
$$\begin{aligned}&[\mathbf {G} , {\bar{{\phi }}}^ {(2)} (\mathbf {x}) ] + \mathbf {G} {{\bar{\varvec{\beta }}}^{(1)}} ( \mathbf {x}) = \mathbf {0}. \end{aligned}$$
(66)

Note that if \({\bar{{\phi }}}^ {(2)} (\mathbf {x})\), \({{\bar{\varvec{\alpha }}}^{(2)}} ( \mathbf {x})\) and \({{\bar{\varvec{\beta }}}^{(1)}}(\mathbf {x})\) satisfy (65) and (66), then \({c_1}{\bar{{\phi }}}^ {(2)} (\mathbf {x})\), \({c_1}{{\bar{\varvec{\alpha }}}^{(2)}} ( \mathbf {x})\), and \( {c_1} {{\bar{\varvec{\beta }}}^{(1)}} (\mathbf {x})\) are also solutions where \({c_1} \in \mathbf {\mathfrak {R}}\). Furthermore, for \(p=3\), we have

$$\begin{aligned}&[\mathbf {Fx} , {\bar{{\phi }}}^ {(3)} (\mathbf {x}) ] + \mathbf {G} {{\bar{\varvec{\alpha }}}^{(3)}} ( \mathbf {x}) =\mathbf {0} \end{aligned}$$
(67)
$$\begin{aligned}&[\mathbf {G} , {\bar{{\phi }}}^ {(3)} (\mathbf {x}) ] + \mathbf {G} {{\bar{\varvec{\beta }}}^{(2)}} ( \mathbf {x}) =\mathbf {0}, \end{aligned}$$
(68)

and \({c_2} {\bar{{\phi }}}^ {(3)} (\mathbf {x}) \), \({c_2} {{\bar{\varvec{\alpha }}}^{(3)}} ( \mathbf {x}) \), and \({c_2}{{\bar{\varvec{\beta }}}^{(2)}}(\mathbf {x})\) also satisfy (67) and (68) where \({c_2} \in \mathbf {\mathfrak {R}}\). By including the \({c_i} \in \mathbf {\mathfrak {R}} \), \(i=1,\ldots ,\rho -1\), we can generate a family of solutions. In fact, with

$$\begin{aligned} {\bar{{\phi }}} (\mathbf {x}) ={c_1}{\bar{{\phi }}}^ {(2)} (\mathbf {x}) + {c_2} {\bar{{\phi }}}^ {(3)} (\mathbf {x}) +\cdots + {c_{{\rho }-1}} {\bar{{\phi }}}^ {(\rho )} (\mathbf {x}), \end{aligned}$$
(69)

we can select \({c_1}, {c_2},\ldots ,{c_{{\rho }-1}}\) to enable the series to converge to different analytic functions.
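As a sketch of this coefficient-selection idea (hypothetical code, not from the paper; it uses the pure null-direction pattern \(\bar{\phi }^{(p)}_1 = -x^p/p\) that appears in (91), (99), and (105) below, and ignores the \(a\)-dependent particular-solution corrections), the \(c_{p-1}\) can be solved term-by-term so that the series sums to a chosen analytic function:

```python
import sympy as sp

# Illustrative sketch: pick null-space coefficients c_{p-1} so that the series
# sum_p c_{p-1} * (-x**p / p) matches the Taylor expansion of a chosen target.
x = sp.symbols('x')
target = x - sp.atan(x)                        # desired analytic limit
rho = 7

poly = sp.series(target, x, 0, rho + 1).removeO()
# coefficient of x**p in the target must equal c_{p-1} * (-1/p)
cs = {p - 1: poly.coeff(x, p) * (-p) for p in range(2, rho + 1)}

rebuilt = sum(cs[p - 1] * (-x**p / sp.Integer(p)) for p in range(2, rho + 1))
print(cs)                                      # c2 = -1, c4 = 1, the rest vanish
print(sp.expand(rebuilt - poly))               # 0: the series matches the target
```

The nonzero entries \(c_2=-1\) and \(c_4=1\) agree with the selection made later in Example 1 for the \(\tan ^{-1}\) solution, where the \(a\)-dependent parts of \(c_3\) and \(c_5\) compensate the particular-solution contributions.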

We consider now the impact of the additional terms \({\bar{{\phi }}}^ {(i)} (\mathbf {x})\) in (60) on the homological equations in (41) and (42). Consider first (41) where \(\mathbf {T}(\mathbf {x})=\mathbf {z}\) and \(\mathbf {z}\) is given in (60). Utilizing (47) and (48), after some manipulation and re-arranging, we obtain

$$\begin{aligned}&[ \mathbf {Fx} , \sum _{p=2 }^ {\rho } {{\phi }}^ {(p)} (\mathbf {x})] + [ \mathbf {Fx} , \sum _{p=2 }^ {\rho } {\bar{{\phi }}}^ {(p)} (\mathbf {x}) ] \nonumber \\&+ \mathbf {G} \sum _{p=2 }^ {\rho } {\varvec{\alpha }^{(p)}} ( \mathbf {x}) + \mathbf {G} \sum _{p=2 }^ {\rho } {\bar{\varvec{\alpha }}^{(p)}} ( \mathbf {x}) = \sum _{p=2 }^ {\rho } \; {\mathbf {f}}^{(p)} ( \mathbf {x}) \nonumber \\&- \sum _{j=3 }^ {\rho } \sum _{p=2 }^ {j-1} \frac{\partial {{{\phi }}^ {(j-p+1)}} (\mathbf {x}) }{\partial \mathbf {x}} \; {\mathbf {f}}^{(p)} ( \mathbf {x}) \nonumber \\&\quad \quad \quad -\sum _{j=3 }^ {\rho } \sum _{p=2 }^ {j-1} \frac{\partial {\bar{{{\phi }}}^ {(j-p+1)}} (\mathbf {x}) }{\partial \mathbf {x}} \; {\mathbf {f}}^{(p)} ( \mathbf {x}). \end{aligned}$$
(70)

Following the same strategy as before, solve for \(p=2\) in (70) as

$$\begin{aligned}&[ \mathbf {Fx} , {{\phi }}^ {(2)} (\mathbf {x}) ] + \mathbf {G} {\varvec{\alpha }^{(2)}} (\mathbf {x}) = {\mathbf {f}} ^{(2)} (\mathbf {x}) \end{aligned}$$
(71)
$$\begin{aligned}&[ \mathbf {Fx} , {\bar{{\phi }}}^ {(2)} (\mathbf {x}) ] + \mathbf {G} {\bar{\varvec{\alpha }}^{(2)}} (\mathbf {x}) = \mathbf {0} \end{aligned}$$
(72)
$$\begin{aligned}&[ \mathbf {G} , {{\phi }}^ {(2)} (\mathbf {x}) ] + \mathbf {G} {\varvec{\beta }^{(1)}} (\mathbf {x}) = {\mathbf {g}} ^{(1)} (\mathbf {x}) \end{aligned}$$
(73)
$$\begin{aligned}&[ \mathbf {G} , {\bar{{\phi }}}^ {(2)} (\mathbf {x}) ] + \mathbf {G} {\bar{\varvec{\beta }}^{(1)}} (\mathbf {x}) = \mathbf {0}, \end{aligned}$$
(74)

and then, for \(p=3\), we have

$$\begin{aligned}&[ \mathbf {Fx} , {{\phi }}^ {(3)} (\mathbf {x}) ] + \mathbf {G} {\varvec{\alpha }^{(3)}} (\mathbf {x}) = {\mathbf {f}} ^{(3)} (\mathbf {x}) -\frac{\partial {{{\phi }}^ {(2)}} (\mathbf {x}) }{\partial \mathbf {x}} {\mathbf {f}} ^{(2)} (\mathbf {x}) \end{aligned}$$
(75)
$$\begin{aligned}&[ \mathbf {Fx} , {\bar{{\phi }}}^ {(3)} (\mathbf {x}) ] + \mathbf {G} {\bar{\varvec{\alpha }}^{(3)}} (\mathbf {x}) = \mathbf {0} \end{aligned}$$
(76)
$$\begin{aligned}&[ \mathbf {G} , {{\phi }}^ {(3)} (\mathbf {x}) ] + \mathbf {G} {\varvec{\beta }^{(2)}} (\mathbf {x}) = {\mathbf {g}} ^{(2)} (\mathbf {x}) -\frac{\partial {{{\phi }}^ {(2)}} (\mathbf {x}) }{\partial \mathbf {x}} {\mathbf {g}} ^{(1)} (\mathbf {x}) \end{aligned}$$
(77)
$$\begin{aligned}&[ \mathbf {G} , {\bar{{\phi }}}^ {(3)} (\mathbf {x}) ] + \mathbf {G} {\bar{\varvec{\beta }}^{(2)}} (\mathbf {x}) = \mathbf {0}, \end{aligned}$$
(78)

and continue up to the desired degree \(\rho \). The final solution is

$$\begin{aligned}&\mathbf {z} =\mathbf {x} - \sum _{p=2 }^ {\rho } {{\phi }}^ {(p)} (\mathbf {x}) -\sum _{p=2 }^ {\rho } {c_{p-1}}{\bar{{\phi }}}^ {(p)} (\mathbf {x}) \end{aligned}$$
(79)
$$\begin{aligned}&\varvec{\alpha }(\mathbf {x}) =\sum _{p=2 }^ {\rho } {\varvec{\alpha }^{(p)}} ( \mathbf {x}) + \sum _{p=2 }^ {\rho } {c_{p-1}} {{\bar{\varvec{\alpha }}}^{(p)}} ( \mathbf {x}) \end{aligned}$$
(80)
$$\begin{aligned}&\varvec{\beta }(\mathbf {x}) =\sum _{p=2 }^ {\rho } {\varvec{\beta }^{(p-1)}} ( \mathbf {x}) +\sum _{p=2 }^ {\rho } {c_{p-1}} {\bar{\varvec{\beta }}^{(p-1)}}(\mathbf {x}) \end{aligned}$$
(81)

where \({c_j} \in \mathbf {\mathfrak {R}} \ \forall \ 1 \le j \le \rho -1 \).

We investigated the convergence of the null space coefficient series in [2] and showed under what conditions the series converge, using the ratio test to verify the convergence condition. The control system designer selects the null space coefficients to satisfy both the design criteria and the series convergence condition.
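A minimal numeric sketch of such a ratio-test check (the two coefficient sequences are illustrative assumptions, not taken from [2]):

```python
import math

# Illustrative sketch: estimate |a_{n+1}/a_n| for the Taylor coefficients of a
# candidate series, as in the ratio test.
def ratio_estimates(coeffs):
    """|a_{n+1}/a_n| over consecutive nonzero coefficients."""
    return [abs(b / a) for a, b in zip(coeffs, coeffs[1:]) if a != 0 and b != 0]

# exp(x) - 1: a_n = 1/n!, ratios 1/(n+1) -> 0, so the series is entire
exp_coeffs = [1 / math.factorial(n) for n in range(1, 12)]
# 1/(1+x^2) in powers of x^2: coefficients (-1)^k, ratios -> 1 (radius 1)
geo_coeffs = [(-1) ** k for k in range(11)]

print(ratio_estimates(exp_coeffs)[-1])   # shrinking with n: entire function
print(ratio_estimates(geo_coeffs)[-1])   # 1.0: finite radius of convergence
```

A ratio tending to a limit \(L\) indicates a radius of convergence \(1/L\); the designer would reject coefficient choices whose ratios fail to stay below 1 on the operating region.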

4 Examples

Example 1 Consider the nonlinear system

$$\begin{aligned} \begin{bmatrix} \dot{x_1}\\ \dot{x_2}\end{bmatrix} = \begin{bmatrix} e^{x_2}-1\\ a {x_1^2}\end{bmatrix} + \begin{bmatrix} 0\\ 1\end{bmatrix} u, \end{aligned}$$
(82)

where \(a \in \mathfrak {R}\) is a parameter that can have various numerical values [17]. This nonlinear system satisfies the conditions of exact feedback linearization. We apply the recursive method to linearize the system up to \(\rho =6\). Begin with \(p=2\) where

$$\begin{aligned} \mathbf {F}=\begin{bmatrix} 0 &{}1 \\ 0 &{}0 \end{bmatrix} , \; \mathbf {G}= \begin{bmatrix} 0 \\ 1 \end{bmatrix} , \; \mathbf {f}^ {(2)}(\mathbf {x})=\begin{bmatrix} \frac{1}{2}{ {x_2^2}} \\ a {x_1^2} \end{bmatrix},\; \mathbf {g}^ {(1)}(\mathbf {x})=\mathbf {0}. \end{aligned}$$

We are searching for \( { \phi }^{(2)}_{1}(\mathbf {x}) , { \phi }^{(2)}_{2}(\mathbf {x})\), and \({{\alpha }}^{(2)}(\mathbf {x})\) as functions of \(x_{1}^{2},\; {x_1} {x_2} ,\) and \(x_{2}^{2} \). For example, \( { \phi }^{(2)}_{1}(\mathbf {x})={a_{11}} x_1^{2}+{a_{12}} {x_1} {x_2} + {a_{13}} x_2^{2},\) where the \({a_{1i}}\) are unknowns. For \(\beta ^{(1)} (\mathbf {x})\), we are searching for functions of \({x_1}\) and \({x_2}\) only. With \(p=2\), \(\mathbf {L} \in \mathfrak {R}^{10\times 11}\) with \(\dim (N(\mathbf {L}))=1\). Solving (71)–(74) yields

$$\begin{aligned}&{ \phi }^{(2)}_{1}(\mathbf {x}) = 0 \end{aligned}$$
(83)
$$\begin{aligned}&{ \phi }^{(2)}_{2}(\mathbf {x}) = -\frac{1}{2} {x_2^2} \end{aligned}$$
(84)
$$\begin{aligned}&{{\alpha }}^{(2)}(\mathbf {x})=a {x_1^2} \end{aligned}$$
(85)
$$\begin{aligned}&{{\beta }}^{(1)}(\mathbf {x})= {x_2}, \end{aligned}$$
(86)

with the null space

$$\begin{aligned} \mathbf {N} (\mathbf {L})= \Big [ \begin{array}{llllllllllllll} -\frac{1}{2}&0&0&\vert&0&-1&0&\vert&0&0&1&\vert&1&0 \end{array} \Big ], \end{aligned}$$
(87)

which can be interpreted as

$$\begin{aligned} \begin{bmatrix} \bar{ \phi }^{(2)}_{1}(\mathbf {x}) \\ \bar{ \phi }^{(2)}_{2} (\mathbf {x}) \end{bmatrix}= \begin{bmatrix} -\frac{1}{2} &{} 0 &{} 0 \\ 0 &{} -1 &{} 0 \end{bmatrix} \begin{bmatrix}x_1^{2} \\ x_1 x_2 \\ x_2^{2} \end{bmatrix} \end{aligned}$$
(88)

and

$$\begin{aligned}&{ \bar{\alpha }}^{(2)}(\mathbf {x})= \begin{bmatrix} 0&0&1 \end{bmatrix} \begin{bmatrix}x_1^{2} \\ x_1 x_2 \\ x_{2}^{2} \end{bmatrix}\end{aligned}$$
(89)
$$\begin{aligned}&{ \bar{\beta }}^{(1)}(\mathbf {x})= \begin{bmatrix} 1&0 \end{bmatrix} \begin{bmatrix}{x_{1}} \\ x_2 \end{bmatrix}. \end{aligned}$$
(90)

Therefore, from (88) to (90) we have the null space solutions

$$\begin{aligned}&{c_1} \bar{ \phi }^{(2)}_{1}(\mathbf {x}) = -\frac{1}{2} {c_1} {x_1^2} \end{aligned}$$
(91)
$$\begin{aligned}&{c_1}\bar{ \phi }^{(2)}_{2}(\mathbf {x}) = -{c_1}{x_1}{x_2} \end{aligned}$$
(92)
$$\begin{aligned}&{c_1} { \bar{\alpha }}^{(2)}(\mathbf {x})= {c_1}{x_2^2} \end{aligned}$$
(93)
$$\begin{aligned}&{c_1} { \bar{\beta }}^{(1)}(\mathbf {x})= {c_1}{x_1}, \end{aligned}$$
(94)

where \({c_1}\in \mathfrak {R}\) is an arbitrary constant. Next we solve for \(p=3\). Note that the higher degree terms on the right hand side of (75) and (77) depend on the previous transformation, \({\phi }^{(2)}(\mathbf {x})\). The solutions of (75)–(78) with \(p=3\) are

$$\begin{aligned}&{\phi }^{(3)}_{1}(\mathbf {x})=0 \end{aligned}$$
(95)
$$\begin{aligned}&{\phi }^{(3)}_{2}(\mathbf {x})= -\frac{{c_1}}{2} {x_1}{x_2^2} -\frac{1}{6} {x_2^3} \end{aligned}$$
(96)
$$\begin{aligned}&{{\alpha }}^{(3)}(\mathbf {x})=a {c_1}{x_1^3}+ a {x_1^2}{x_2} +{c_1} {x_2^3}, \end{aligned}$$
(97)
$$\begin{aligned}&{{\beta }}^{(2)}(\mathbf {x})= {c_1}{x_1}{x_2}+\frac{1}{2} {x_2^2}, \end{aligned}$$
(98)

where \(c_1\) is the same constant from the previous step and the null space solutions are

$$\begin{aligned}&{c_2}{ \bar{\phi }}^{(3)}_{1}(\mathbf {x}) = -\frac{1}{3} {c_2}{x_1^3} , \ {c_2}{ \bar{\phi }}^{(3)}_{2}(\mathbf {x}) = -{c_2} x_{1}^{2} {x_2} \end{aligned}$$
(99)
$$\begin{aligned}&{c_2}{ \bar{\alpha }}^{(3)}(\mathbf {x})= 2{c_2}{x_{1}}{x_2^2} , \ {c_2}{\bar{\beta }}^{(2)}(\mathbf {x})= {c_2} x_{1}^{2}, \end{aligned}$$
(100)

where \({c_2}\in \mathfrak {R}\) is an arbitrary coefficient. The solutions for \(p=4\) are

$$\begin{aligned}&{\phi }^{(4)}_{1}(\mathbf {x})=\frac{2a}{59} {x_1^4}\end{aligned}$$
(101)
$$\begin{aligned}&{\phi }^{(4)}_{2}(\mathbf {x}) =\frac{8a}{59} {x_1^3} {x_2} -\frac{{c_2}}{2} {x_1^2}{x_2^2} -\frac{c_1}{6} {x_1}{x_2^3}-\frac{1}{24} {x_2^4}\end{aligned}$$
(102)
$$\begin{aligned}&{{\alpha }}^{(4)}(\mathbf {x})=a {c_2}{x_1^4}+ a {c_1}{x_1^3}{x_2} + \frac{11a}{118} {x_1^2}{x_2^2} +2{c_2}\; {x_1}{x_2^3} \nonumber \\&\qquad \; \;+\frac{7 c_1}{12} {x_2^4} \end{aligned}$$
(103)
$$\begin{aligned}&{{\beta }}^{(3)}(\mathbf {x})= -\frac{8a}{59} {x_1^3}+{c_2}{x_1^2}{x_2}+\frac{c_1}{2} {x_1}{x_2^2}+\frac{1}{6}{x_2^3} \end{aligned}$$
(104)
$$\begin{aligned}&{c_3} {\bar{\phi }}^{(4)}_{1}(\mathbf {x}) = -\frac{1}{4} {c_3} {x_1^4} \end{aligned}$$
(105)
$$\begin{aligned}&{c_3} {\bar{\phi }}^{(4)}_{2}(\mathbf {x}) = -{c_3} {x_1^3} {x_2} \end{aligned}$$
(106)
$$\begin{aligned}&{c_3} {\bar{\alpha }}^{(4)}(\mathbf {x}) = 3{c_3}{x_1^2} {{x_2}^2} \end{aligned}$$
(107)
$$\begin{aligned}&{c_3} {\bar{\beta }}^{(3)}(\mathbf {x}) = {c_3} {x_1^3}, \end{aligned}$$
(108)

where \({c_3}\in \mathfrak {R}\) is an arbitrary coefficient and \(c_1\) and \(c_2\) are the same constants from the previous steps. Continuing this process for \(p=5\) and \(p=6\) and combining all the like terms yields

$$\begin{aligned}&{ {\phi }}_{1}(\mathbf {x}) = -\frac{1}{2} {c_1}{x_1^2}-\frac{1}{3} {c_2}{x_1^3} -\frac{1}{4} (c_3 - \frac{8a}{59}) {x_1^4} \nonumber \\&\qquad \qquad \qquad \qquad \qquad - \frac{1}{5} ( c_4 - \frac{50a}{451} c_1 ) x_1^5 + \cdots \end{aligned}$$
(109)
$$\begin{aligned}&{ {\phi }}_{2}(\mathbf {x}) = -\frac{1}{2} {x_2^2}-\frac{1}{6} {x_2^3} -\frac{1}{24} {x_2^4} -\frac{1}{120} x_2^5 - \cdots \nonumber \\&\qquad \qquad - \left( x_2+\frac{1}{2} x_2^2 +\frac{1}{6} x_2^3 + \frac{1}{24} x_2^4 + \frac{1}{120} x_2^5 + \cdots \right) \nonumber \\&\qquad \qquad \times \left( {c_1}{x_1} + {c_2} {x_1^2} + ( {c_3} - \frac{8a}{59}) {x_1^3} + (c_4 - \frac{50 a}{451} c_1 ) x_1^4 \right. \nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad \left. + (c_5 - \frac{90 a}{973} c_2 ) x_1^5 + \cdots \right) \end{aligned}$$
(110)
$$\begin{aligned}&{\alpha }(\mathbf {x}) = ({c_1}+2{c_2} {x_1})\left( x_2^2+{x_2^3} +\frac{7}{12} {x_2^4} + \frac{1}{4} x_2^5 + \cdots \right) \nonumber \\&\qquad \qquad + a {x_1^2} \left( 1+c_1 x_1+ c_2 x_1^2+ ( c_3-\frac{8 a}{59}) x_1^3 \right. \nonumber \\&\qquad \qquad \qquad \left. +( c_4-\frac{50 a}{451}c_1) x_1^4+(c_5-\frac{90 a}{973}c_2) x_1^5+\cdots \right) \nonumber \\&\qquad \qquad +x_1^2 \left( a {x_2} + (3c_3+\frac{11a}{118}) x_2^2 + (3 c_3-\frac{85 a}{354}) x_2^3 \right. \nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \left. + ( \frac{7}{4} c_3- \frac{277 a}{1416}) x_2^4 +\cdots \right) \nonumber \\&\qquad \qquad + x_1^3 \left( a c_1 x_2+(4 c_4+\frac{51 a}{902}c_1) x_2^2 \right. \nonumber \\&\qquad \qquad \left. +( 4 c_4-\frac{749 a}{2706}c_1) x_2^3 + ( \frac{7}{3} c_4-\frac{783 a}{3608}c_1) x_2^4 +\cdots \right) \nonumber \\&\qquad \qquad + x_1^4 \left( a c_2 x_2 + ( 5 c_5+\frac{73a}{1946} c_2) x_2^2 \right. \nonumber \\&\qquad \quad \left. + ( 5 c_5+\frac{1727a}{5838} c_2) x_2^3 + \cdots \right) +a x_1^5 \left( (c_3 - \frac{8a}{59}) x_2 \right. \nonumber \\&\qquad \qquad \left. + (\frac{11}{414}c_3 - \frac{44a}{12213}) x_2^2 + \cdots \right) \end{aligned}$$
(111)
$$\begin{aligned}&{\beta }(\mathbf {x}) ={x_2}+\frac{1}{2} {x_2^2}+\frac{1}{6}{x_2^3}+\frac{1}{24} x_2^4+ \frac{1}{120}x_2^5 + \cdots \nonumber \\&\qquad \qquad + \left( 1+ {x_2}+\frac{1}{2} {x_2^2} + \frac{1}{6}x_2^3 + \frac{1}{24}x_2^4 + \cdots \right) \times \nonumber \\&\qquad \qquad \left( {c_1}{x_1}+{c_2}{x_1^2}+(c_3-\frac{8a}{59}) {x_1^3}+( c_4-\frac{50 a}{451} c_1) x_1^4 \right. \nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad \left. + (c_5 - \frac{90a}{973} c_2) x_1^5 + \cdots \right) . \end{aligned}$$
(112)

As shown in [17], the nonlinear system in (82) has an exact feedback linearization solution

$$\begin{aligned}&{z_1}={x_1}, {z_2}=e^{x_2} -1, {\hat{\alpha }(\mathbf {x})}=-a {x_1^2} , {\hat{\beta }(\mathbf {x})} =e^{-x_2}. \end{aligned}$$
(113)

Substituting \(c_1=c_2=c_4=c_5=0 \) and \(c_3={8a}/{59}\) into (109)–(112) yields

$$\begin{aligned}&{ {\phi }}_{1}(\mathbf {x})=0 \end{aligned}$$
(114)
$$\begin{aligned}&{ {\phi }}_{2}(\mathbf {x})=-\frac{1}{2} {x_2^2} -\frac{1}{6} {x_2^3} -\frac{1}{24} {x_2^4}+ \cdots = x_2+1-e^{x_2}\end{aligned}$$
(115)
$$\begin{aligned}&{\alpha }(\mathbf {x}) = a {x_1^2} ( 1+ {x_2}+ \frac{1}{2} {x_2^2} + \cdots ) = a {x_1^2} e^{x_2} \end{aligned}$$
(116)
$$\begin{aligned}&{\beta }(\mathbf {x}) ={x_2}+\frac{1}{2} {x_2^2} +\frac{1}{6}{x_2^3}+ \cdots = e^{x_2}-1 . \end{aligned}$$
(117)

Then, with (35)–(36) and (60)–(62), we compute

$$\begin{aligned}&z_{1}={x_1} , \; z_{2}=e^{x_2}-1, \;\hat{{\alpha }}(\mathbf {x}) = -a {x_1^2}, \;\hat{{\beta }}(\mathbf {x}) =e^{-x_2} , \end{aligned}$$
(118)

which matches the known analytic solution in (113). Now we will see that other analytic solutions can be obtained by judicious choice of the null space coefficients. For example, by selecting \(c_1=0\), \(c_2=-1 \), \(c_3=8a/59\), \(c_4=1\), and \(c_5=-90a/973\) we find

$$\begin{aligned}&{ {\phi }}_{1}(\mathbf {x})=\frac{1}{3} {x_1^3} -\frac{1}{5} {x_1^5} + \cdots = x_1-\tan ^{-1} x_1\end{aligned}$$
(119)
$$\begin{aligned}&{ {\phi }}_{2}(\mathbf {x})= -\left( \frac{1}{2} {x_2^2} +\frac{1}{6} {x_2^3} +\frac{1}{24} {x_2^4} + \cdots \right) \nonumber \\&\qquad \qquad \qquad -\left( x_2 +\frac{1}{2} {x_2^2} + \cdots \right) \left( -x_1^2+ x_1^4 + \cdots \right) \nonumber \\&\qquad = x_2-\left( x_2+\frac{1}{2} {x_2^2} +\frac{1}{6} {x_2^3} + \cdots \right) \left( 1-x_1^2+ x_1^4 + \cdots \right) \nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad = x_2-\frac{1}{1+x_1^2}\left( e^{x_2} -1\right) \end{aligned}$$
(120)
$$\begin{aligned}&{\alpha }(\mathbf {x}) = \nonumber \\&\quad -2 {x_1} \left( 1-2 x_1^2 + \cdots \right) \left( x_2^2+{x_2^3} +\frac{7}{12} {x_2^4} + \frac{1}{4} x_2^5 + \cdots \right) \nonumber \\&\qquad \quad +a x_1^2 \left( 1-x_1^2 + \cdots \right) \left( 1+{x_2} + \frac{1}{2} x_2^2+ \frac{1}{6} x_2^3 +\cdots \right) \nonumber \\&\qquad \qquad \qquad \qquad \qquad =\frac{a {x_1^2}e^{x_2}}{1+x_1^2} -\frac{2 {x_1}{({e}^{x_2} -1)}^{2} }{(1+{x_1^2})^2} \end{aligned}$$
(121)
$$\begin{aligned}&{\beta }(\mathbf {x}) ={x_2}+\frac{1}{2} {x_2^2}+\frac{1}{6}{x_2^3}+\frac{1}{24} x_2^4 + \frac{1}{120}x_2^5 + \cdots \nonumber \\&\qquad \quad + \left( 1+ {x_2}+\frac{1}{2} {x_2^2} + \frac{1}{6}x_2^3 + \cdots \right) \left( -{x_1^2}+x_1^4 + \cdots \right) \nonumber \\&\quad =-1+\left( 1+x_2+\frac{1}{2} {x_2^2}+\frac{1}{6}{x_2^3} + \cdots \right) \left( 1-{x_1^2}+x_1^4 + \cdots \right) \nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad = -1+\frac{1}{1+x_1^2} e^{x_2} \end{aligned}$$
(122)

and, as before, with (35)–(36) and (60)–(62), we compute

$$\begin{aligned}&{z_1}=\tan ^{-1} x_1 \end{aligned}$$
(123)
$$\begin{aligned}&{z_2}=\frac{1}{1+{x_1^2}}\; (e^{x_2} -1) \end{aligned}$$
(124)
$$\begin{aligned}&{\hat{\alpha }}{(\mathbf {x})}=-a {x_1^2}+\frac{2 {x_1}{e^{-x_2}}{({e}^{x_2} -1)}^{2} }{1+{x_1^2}} \end{aligned}$$
(125)
$$\begin{aligned}&{\hat{\beta }}{(\mathbf {x})}={e^{-x_2}}(1+{x_1^2} ). \end{aligned}$$
(126)
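As an independent symbolic check (a sketch with hypothetical SymPy code, not part of the paper's derivation; it assumes the feedback convention \(u=\hat{\alpha }(\mathbf {x})+\hat{\beta }(\mathbf {x})v\), consistent with the natural solution (113)), the closed forms (123)–(126) can be verified to put (82) into the chain-of-integrators form \(\dot{z}_1=z_2\), \(\dot{z}_2=v\):

```python
import sympy as sp

x1, x2, v, a = sp.symbols('x1 x2 v a')

# Plant (82): x1' = exp(x2) - 1,  x2' = a*x1**2 + u
# Candidate exact solution (123)-(126); assumed convention u = alpha + beta*v
z1 = sp.atan(x1)
z2 = (sp.exp(x2) - 1) / (1 + x1**2)
alpha_hat = -a * x1**2 + 2 * x1 * sp.exp(-x2) * (sp.exp(x2) - 1)**2 / (1 + x1**2)
beta_hat = sp.exp(-x2) * (1 + x1**2)

u = alpha_hat + beta_hat * v
xdot = sp.Matrix([sp.exp(x2) - 1, a * x1**2 + u])

# Chain rule: zdot = (dz/dx) * xdot
zdot = sp.Matrix([z1, z2]).jacobian(sp.Matrix([x1, x2])) * xdot
print(sp.simplify(zdot[0] - z2))   # 0: z1' = z2
print(sp.simplify(zdot[1] - v))    # 0: z2' = v
```

Both residuals simplify to zero, confirming that this member of the family is an exact, not merely approximate, linearizing solution.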

As a final example, let \(c_1=-2\), \(c_2=3 \), \(c_3=-4(1-{2a}/{59})\), \(c_4=-5(-1+{20a}/{451})\), and \(c_5=-6(1-{45a}/{973})\). Then we can compute yet another analytic solution as

$$\begin{aligned}&{z_1}=\frac{{x_1}}{1+{x_1}} \end{aligned}$$
(127)
$$\begin{aligned}&{z_2}=\frac{1}{(1+{x_1})^2} (e^{x_2} -1) \end{aligned}$$
(128)
$$\begin{aligned}&{\hat{\alpha }}(\mathbf {x})=-a {x_1^2}+\frac{2 {e^{-x_2}}{({e}^{x_2} -1)}^{2}}{1+x_1} \end{aligned}$$
(129)
$$\begin{aligned}&{\hat{\beta }}{(\mathbf {x})}={e^{-x_2}}{(1+{x_1})^2} . \end{aligned}$$
(130)

This example illustrates how the approximate linearization method, applied to a nonlinear system known to be exactly feedback linearizable, can be employed in conjunction with the null space to obtain a family of analytic solutions asymptotically. With careful analysis of the evolving patterns in the solutions as the order of the approximation increases, and with judicious selection of the null space coefficients, three different, yet equally viable, nonlinear state transformations and associated nonlinear feedback functions were obtained. It is worth mentioning that the null space coefficients are constants selected by the control system designer to meet design criteria. This particular numerical experiment is intended only to demonstrate that there are possibilities for optimizing performance through the judicious choice of exact nonlinear feedback solutions that emerge from the recursive approximate method and the selective use of the null space. This is a topic for future investigation.

Example 2 Consider the nonlinear system in (25). We showed that solving the PDEs directly is very challenging. Here we apply the recursive method to find the linearizing transformation and feedback parameters. This nonlinear system has only second-degree nonlinear terms; therefore, we apply the recursive method up to \(\rho = 2\). Similar to Example 1, we are searching for \( { \phi }^{(2)}_{1}(\mathbf {x}) , { \phi }^{(2)}_{2}(\mathbf {x}), { \phi }^{(2)}_{3}(\mathbf {x})\), and \({{\alpha }}^{(2)}(\mathbf {x})\) as functions of \(x_{1}^{2},\; {x_1} {x_2},\; {x_1} {x_3} , \;x_{2}^{2},\; {x_2} {x_3} \), and \(x_{3}^{2}\). For \(\beta ^{(1)} (\mathbf {x})\), we are searching for functions of \({x_1}, \;{x_2}\), and \({x_3}\) only. With \(p=2\), \(\mathbf {L} \in \mathfrak {R}^{27\times 27}\) with \(\dim (N(\mathbf {L}))=1\). Solving (71)–(74) yields

$$\begin{aligned}&{ \phi }^{(2)}_{1}(\mathbf {x}) =0.197 {x_1^2} + {x_1} {x_3} - 0.5 {x_2^2} \end{aligned}$$
(131)
$$\begin{aligned}&{ \phi }^{(2)}_{2}(\mathbf {x}) =-4 {x_1^2} + 2.3939 {x_1} {x_2} \end{aligned}$$
(132)
$$\begin{aligned}&{ \phi }^{(2)}_{3}(\mathbf {x}) =-7 {x_1} {x_2} +2.3939 {x_1} {x_3} + 1.3939 {x_2^2} - {x_3^2} \end{aligned}$$
(133)
$$\begin{aligned}&{{\alpha }}^{(2)}(\mathbf {x})=-1.4091 {x_1^2} + 7 {x_1} {x_2} -2 {x_1} {x_3} + 7.1061 {x_2^2} \nonumber \\&\qquad \qquad - 0.1818 {x_2} {x_3} - {x_3^2} \end{aligned}$$
(134)
$$\begin{aligned}&{{\beta }}^{(1)}(\mathbf {x})= -2.3939 {x_1} + {x_2} + 2 {x_3}. \end{aligned}$$
(135)

We have the null space solution

$$\begin{aligned}&{c_1} \bar{ \phi }^{(2)}_{1}(\mathbf {x}) = -\frac{1}{2} {c_1}{x_1^2} \end{aligned}$$
(136)
$$\begin{aligned}&{c_1}\bar{ \phi }^{(2)}_{2}(\mathbf {x}) = -{c_1}{x_1}{x_2} \end{aligned}$$
(137)
$$\begin{aligned}&{c_1}\bar{ \phi }^{(2)}_{3}(\mathbf {x}) = -{c_1}{x_1}{x_3} - {c_1}{x_2^2} \end{aligned}$$
(138)
$$\begin{aligned}&{c_1} { \bar{\alpha }}^{(2)}(\mathbf {x})= -\frac{3}{2}{c_1}{x_1^2} + {c_1}{x_2^2} + 3 {c_1}{x_2}{x_3}\end{aligned}$$
(139)
$$\begin{aligned}&{c_1} { \bar{\beta }}^{(1)}(\mathbf {x})= {c_1}{x_1}, \end{aligned}$$
(140)

where \({c_1}\in \mathfrak {R}\) is an arbitrary constant. From (60) to (62) we can find \(\mathbf {z} \), \(\varvec{\alpha }\), and \(\varvec{\beta }\) as

$$\begin{aligned}&z_1=x_1 - 0.197 {x_1^2} - {x_1} {x_3} + 0.5 {x_2^2} + \frac{1}{2} {c_1}{x_1^2} \end{aligned}$$
(141)
$$\begin{aligned}&z_2=x_2 + 4 {x_1^2} - 2.3939 {x_1} {x_2} + {c_1}{x_1}{x_2} \end{aligned}$$
(142)
$$\begin{aligned}&z_3= {x_3} + 7 {x_1} {x_2} -2.3939 {x_1} {x_3} - 1.3939 {x_2^2} + {x_3^2} + {c_1}{x_1}{x_3} + {c_1}{x_2^2}\end{aligned}$$
(143)
$$\begin{aligned}&\alpha = -1.4091 {x_1^2} + 7 {x_1} {x_2} -2 {x_1} {x_3} + 7.1061 {x_2^2} \nonumber \\&\qquad - 0.1818 {x_2} {x_3} - {x_3^2} -\frac{3}{2}{c_1}{x_1^2} + {c_1}{x_2^2} + 3 {c_1}{x_2}{x_3}\end{aligned}$$
(144)
$$\begin{aligned}&\beta = -2.3939 {x_1} + {x_2} + 2 {x_3} + {c_1}{x_1}. \end{aligned}$$
(145)

An exact nonlinear transformation is found by substituting \(\mathbf {z}\), \(\alpha \), and \(\beta \) into (19) and (35)–(36). This example demonstrates how to find the nonlinear transformations for a nonlinear system for which solving the PDEs is very challenging. Employing the proposed method, the nonlinear transformations are obtained without solving the PDEs. In addition, utilizing the null space, a family of analytic solutions can be generated. The control system designer will select the \(c_1\) value to meet design criteria. This family of solutions leads to the possibility of optimization considerations in selecting the desired exact solution.

5 Conclusion

We employed a known recursive method to compute the nonlinear transformation and nonlinear feedback for systems that satisfy the exact feedback linearization conditions. The coordinate transformations and nonlinear feedback are obtained symbolically without having to solve the nonlinear PDEs directly. The recursive algorithm is algebraic and computationally easier than solving the set of nonlinear PDEs. We utilized the fact that when the original nonlinear model satisfies the exact feedback linearization conditions, it also satisfies the approximate feedback linearization conditions up to order \(\rho \). We then applied an approximate feedback linearization method recursively to compute the nonlinear transformation and nonlinear feedback. We explored the family of solutions of the PDEs using the null space and showed that, by judiciously selecting the null space coefficients, we can obtain various solutions that converge to known exact solutions as \(\rho \rightarrow \infty \). The control system designer can select the null space coefficients to meet design criteria and improve system performance.