Obstruction results are aided by the generality offered by the topological viewpoint. Constructive results, however, often require more structure of the problem to be available [12]. For example, in 1983 Artstein [5] and Sontag [52] introduced CLFs; using center manifold theory, Aeyels was one of the first to provide constructive arguments towards smooth stabilization of nonlinear systems [1]; feedback linearization emerged in the late 1980s, as documented in the monographs [27, 40]; and in the late 1980s backstepping was initiated by Byrnes, Isidori, Kokotović, Tsinias, Saberi, Sontag, Sussmann and others [31]. See also the 1985 and 2001 surveys on constructive nonlinear control by Kokotović et al. [29, 30].

Notwithstanding these constructive methods, the previous chapter showcased a collection of fundamental topological obstructions to the asymptotic stabilization of subsets of manifolds by means of merely continuous feedback, let alone smooth feedback. Therefore, in this chapter we briefly review a variety of methods that have been proposed to deal with this situation. See for example [20] for a survey from 1995 on handling an assortment of topological obstructions. Since then, the focus has shifted from smooth feedback to several manifestations of discontinuous techniques, as highlighted below.

7.1 On Accepting the Obstruction

Motivated by the Poincaré–Hopf theorem, early remarks on almost global asymptotic stabilization can be found in [28]. This notion is intimately related to multistability; now, however, a single point is asymptotically stable whereas the remaining invariant sets are not, recall Fig. 1.1(iv). This relates to Question (iii) from Chap. 6, i.e., how does the dynamical system behave outside of the domain of attraction of the attractor under consideration? For example, when stabilizing an isolated equilibrium point on a closed manifold \(\textsf{M}^n\) by means of continuous feedback, the material from Chap. 6 implies that the indices of the remaining zeros of the vector field must add up to \(\chi (\textsf{M}^n)-(-1)^n\).

Multistability is usually handled theoretically via an alteration of classical Lyapunov theory [4, 23], e.g., by passing to the so-called “dual density formulation” as proposed in [44]. There, global requirements are relaxed to almost global requirements. By doing so, topological obstructions are surmounted at the cost of potentially introducing singularities, as we saw in Sect. 1.3. This approach is illustrated, for example, in [3], where the inverted pendulum is almost globally stabilized.

Although almost global stability of some equilibrium point \(p^{\star }\) can be justified by the set of points not in the domain of attraction of \(p^{\star }\) being of measure zero, this is only true in the idealized setting. For instance, once uncertainties or perturbations are taken into account, even arbitrarily small ones, the set to be avoided is by no means of measure zero. Hence, this approach cannot be categorized as robust.

7.2 On Time-Varying Feedback

Recall from Example 6.4 that time-varying feedback does not in general allow for overcoming global topological obstructions. Nevertheless, elaborating on the work by Sontag and Sussmann [51], under controllability assumptions, the so-called “return method” devised by Coron shows that time-varying feedback can overcome some local topological obstructions to the stabilization of equilibrium points [17, 18]. Prior to the work by Coron, Samson showed that the nonholonomic integrator (6.1) can indeed be stabilized by time-varying feedback [39, 48, p. 566]. A more general and explicit approach is described in [41] by Pomet. We follow Sepulchre, Wertz and Campion [50] in providing intuition regarding this matter.

Example 7.1

(On periodic feedback [50]) Consider the nonlinear dynamical control system

$$\begin{aligned} \left\{ \begin{aligned} \dot{x}_1 =&u,\\ \dot{x}_2 =&x_1,\\ \dot{x}_3 =&x_1^3. \end{aligned}\right. \end{aligned}$$
(7.1)

System (7.1) is small-time locally controllable at 0 since, after writing (7.1) in the standard form \(\dot{x}=f(x)+g(x)u\), one finds that the set

$$\begin{aligned} \{g(x),[f(x),g(x)], [g(x),[g(x),[g(x),f(x)]]]\} \end{aligned}$$

evaluated at 0, spans \(\mathbb {R}^3\simeq T_0\mathbb {R}^3\) [55]. Nonetheless, (7.1) fails to satisfy Brockett’s condition, cf. Theorem 6.1. In particular, observe that if \(\mu (x_1,x_2,x_3)\) is any continuous state feedback aimed at asymptotically stabilizing 0, then \(\mu \) cannot vanish on \(A_{\epsilon }=\{x\in \mathbb {R}^3:x_1=0,\ 0<\Vert x\Vert <\epsilon \}\) for some sufficiently small \(\epsilon >0\), as otherwise an additional equilibrium point would be introduced. By continuity, this implies that \(\mu \) cannot change sign on the annulus \(A_{\epsilon }\). Evidently, a change in sign might be necessary to stabilize a neighbourhood of the origin: consider, for example, \(x(0)=(0,x_2(0),0)\) with either \(x_2(0)>0\) or \(x_2(0)<0\). Hence, time-varying periodic feedback, which enforces a (persistent) change in sign, seems a viable option indeed. One can construct a similar story for Brockett’s nonholonomic integrator (6.1). To provide a hand-waving comment, let the system start from \((0,0,x_3(0))\), with \(x_3(0)\ne 0\). Then both \(u_1,u_2\) must become nonzero, pushing both \(x_1,x_2\) away from 0. To make sure \(x_1,x_2\) return to 0, both \(u_1,u_2\) must flip sign, but this zero crossing (easily) induces an equilibrium away from 0.
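
As a sanity check on this spanning claim, the following hypothetical SymPy snippet computes the relevant brackets symbolically and verifies that they span \(\mathbb {R}^3\) at the origin; the bracket convention \([a,b]=(\textrm{D}b)a-(\textrm{D}a)b\) and the snippet itself are illustrative and not taken from [50, 55].

```python
# Symbolic verification (sketch) of the controllability brackets for (7.1).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([0, x1, x1**3])   # drift vector field of (7.1)
g = sp.Matrix([1, 0, 0])        # input vector field of (7.1)

def bracket(a, b):
    # Lie bracket [a, b] = (Db) a - (Da) b
    return b.jacobian(x) * a - a.jacobian(x) * b

b1 = bracket(f, g)                          # [f, g]
b2 = bracket(g, bracket(g, bracket(g, f)))  # [g, [g, [g, f]]]

# Stack the brackets, evaluate at the origin and check that they span R^3.
M = sp.Matrix.hstack(g, b1, b2).subs({x1: 0, x2: 0, x3: 0})
print(M, M.rank())  # rank 3
```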

For more historical context, see the early survey paper [39] or the remarks in [19].

Regarding Sect. 1.3, with the aforementioned in mind, an optimal control cost akin to (1.2) might not be ideal: time is penalized, while (time-invariant) continuous globally asymptotically stabilizing feedback is obstructed by \(\chi (\textsf{G})=0\). Moreover, the almost globally asymptotically stabilizing feedback is not robust.

As many earlier examples, including Example 7.1, highlight, some notion of switching is instrumental in overcoming topological obstructions. The next section highlights one of the most influential solution frameworks: discontinuous and, in particular, hybrid control. This framework is also capable of overcoming one of the inherent drawbacks of stabilizing (periodic) time-varying controllers: they are not robust, often slow and thereby costly.

7.3 On Discontinuous Control

As mentioned before, allowing for discontinuities in dynamical control systems is not an immediate remedy for topological obstructions; they can prevail [13, 37, 47]. Yet, under controllability assumptions fitting the discontinuous solution framework at hand, discontinuous feedback frequently allows for stabilization [2, 16, 33, 34], e.g., to control the nonholonomic integrator (6.1), one can consider a sliding-mode controller [9]. Notably, the CLF generalization due to Rifford allows for a principled approach to designing stabilizing discontinuous feedback laws [45, 46], e.g., an explicit feedback stabilizing the nonholonomic integrator (6.1) is presented in [34]. See also [11] for a numerical study of CLFs with regard to the nonholonomic integrator. It is imperative to remark that, under controllability assumptions, one can show that discontinuous stabilization schemes exist that are robust against measurement noise [53]. For an extensive tutorial paper on discontinuous dynamical systems, see [21], in particular the section on different notions of solutions. The survey articles by Clarke, on the other hand, focus more on control-theoretic aspects [14, 15].

7.3.1 Hybrid Control Exemplified

Motivated by the modelling of physical systems, e.g., relays, Witsenhausen was one of the early contributors to hybrid control theory [57]. In part due to the aforementioned topological obstructions, however, hybrid control theory became an abstract theory in its own right, e.g., see [25, 49, 56]. In this section we only scratch the surface (Lyapunov-based switching) of what is possible using these techniques; a more general treatment would discuss differential inclusions, hysteresis and so forth.

Example 7.2

(Example 1 continued) Recall Fig. 1.1(iii); we start by introducing how one could study discontinuous dynamical systems. Consider a hybrid dynamical system on the circle \(\mathbb {S}^1\subset \mathbb {R}^2\)

$$\begin{aligned} \mathcal {H}:\left\{ \begin{aligned} \dot{x}&=f(x),\quad x\in C\\ x^+&= g(x),\quad x\in D \end{aligned}\right. \end{aligned}$$
(7.2)

for \(C\subseteq \mathbb {S}^1\) the flow set and \(D\subseteq \mathbb {S}^1\) the jump set, with g the corresponding jump map. This means that on the set C the system behaves like the dynamical systems we encountered before, but on D the state can change in a way that cannot be described using \(C^0\) vector fields and flows. For instance, a hybrid system might describe a walking robot, with the jump map handling the impact with the ground. See [25] for how solutions of hybrid systems are defined, or see [42, Sect. 24.2] for a succinct introduction. Now, parametrizing a hybrid dynamical system in polar coordinates on \(\mathbb {S}^1\), consider \(\dot{\theta }=\sin (\theta )\) under the jump map \(g(\theta )=\varepsilon \) for some \(\varepsilon \in (0,\pi )\), with the jump set being \(D=\{0\}\). Then, by means of the Lyapunov function \(V(\theta )=\cos (\theta )+1\), stability of this hybrid system can be asserted, since \(\langle \partial _{\theta }V(\theta ),\dot{\theta }\rangle =-\sin (\theta )^2\) while for all \(\theta \in D\) one has \(V(g(\theta ))<V(\theta )\). This is graphically summarized in Fig. 7.1(i). Again, we refer to [25, 42] for the technicalities of stability in the hybrid context.
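
To make the interplay between flowing and jumping concrete, consider the following minimal simulation sketch in the angle coordinate \(\theta \); the forward Euler discretization, the tolerance used to detect the jump set and the value \(\varepsilon =0.3\) are hypothetical, illustrative choices and not part of [25, 42].

```python
# Minimal simulation sketch of the hybrid system of Example 7.2 on the circle,
# parametrized by the angle theta; all numerical choices are illustrative.
import numpy as np

eps, dt, T = 0.3, 1e-3, 15.0           # jump target, Euler step, time horizon
V = lambda th: np.cos(th) + 1.0        # Lyapunov function V(theta) = cos(theta) + 1

theta, t = 0.0, 0.0                    # start exactly at the unstable equilibrium theta = 0
while t < T:
    if np.isclose(np.mod(theta, 2 * np.pi), 0.0):  # jump set D = {0}
        theta = eps                                # jump map g(theta) = eps, so V drops from 2 to cos(eps) + 1
    else:                                          # flow set C
        theta += dt * np.sin(theta)                # flow: theta_dot = sin(theta), so dV/dt = -sin(theta)^2 <= 0
    t += dt

print(theta, V(theta))  # theta approaches pi, where V attains its minimum 0
```

Without the jump, the flow alone would remain stuck at the unstable equilibrium \(\theta =0\); the single jump is precisely what removes this obstruction.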

Fig. 7.1

For Example 7.2, (i) the jump map g and the Lyapunov function V and for Example 7.3, (ii) the family of Lyapunov functions, flow- and jump sets

The approach taken in Example 7.2 lacks robustness and is too ad hoc for practical purposes. In the context of stabilization of an equilibrium point \(p^{\star }\in \textsf{M}\), a successful hybrid methodology to overcome topological obstructions is to exploit multiple potential (Lyapunov) functions [10]. The so-called “synergistic” approach uses a family of potential functions \(\mathscr {V}=\{V_i\}_{i\in \mathcal {I}}\), \(V_i:\textsf{M}\rightarrow \mathbb {R}_{\ge 0}\), with \(p^{\star }\) being a critical point of all \(V_i\). The remaining critical points can differ, but if \(q^{\star }\ne p^{\star }\) is a critical point of \(V_j\), there must be a \(k\ne j\) such that \(V_k(q^{\star })<V_j(q^{\star })\). Now, if one switches appropriately between the controllers induced by these potential functions, one cannot get trapped at a wrong critical point and hence \(p^{\star }\) will be stabilized.

Example 7.3

(Example 7.2 continued) We now follow [42, Example 24.5] and consider a hybrid control system on the embedded circle \(\mathbb {S}^1\subset \mathbb {R}^2\) to sketch more formally how the synergistic method works. Consider the control system \(\dot{x}=u\Omega x\), where \(u\in \mathbb {R}\) and \(\Omega \in \textsf{Sp}(2,\mathbb {R})\) is such that \(\Omega x\in T_x\mathbb {S}^1=\{v\in \mathbb {R}^2:\langle v,x \rangle =0 \}\). The goal is the same as before, to globally stabilize \(\theta =\pi \), i.e., \(x'=(-1,0)\in \mathbb {R}^2\). We assume a smooth potential function \(V:\mathbb {S}^1\rightarrow \mathbb {R}_{\ge 0}\) is available with two critical points: \(x'\) at its global minimum and some other point \(\bar{x}\) at its global maximum. This potential function V is then used to construct the family of potential functions \(\mathscr {V}=\{V_1,V_2\}\) as discussed above; see [35] for the details. To be able to switch between these functions, the state space is augmented with the set \(\{1,2\}\). Now let \(m(x)=\min _{q\in \{1,2\}}V_q(x)\), \(C=\{(x,q)\in \mathbb {S}^1\times \{1,2\}:m(x)+\delta \ge V_q(x)\}\) and \(D=\{(x,q)\in \mathbb {S}^1\times \{1,2\}:m(x)+\delta \le V_q(x)\}\) for some \(\delta >0\); that is, we are in the jump set when the current selection of \(V_q\) is “too large”. Now define the (set-valued) map \(g_q:\mathbb {S}^1\rightrightarrows \{1,2\}\) by \(g_q(x)=\{q\in \{1,2\}:V_q(x)=m(x)\}\) and select the feedback controller, i.e., the input u, as

$$\begin{aligned} \mu (x,q) = - \langle \textrm{grad}\, V_q(x),\Omega x\rangle . \end{aligned}$$
(7.3)

Indeed, under (7.3), \(V_q\) is a Lyapunov function for \(x_q'\) on \(\mathbb {S}^1\setminus \{\bar{x}_q\}\), as \(\dot{V}_q = - \langle \textrm{grad}\, V_q(x),\Omega x\rangle ^2 < 0\) for all \(x\in \mathbb {S}^1\setminus \{\bar{x}_q,x_q'\}\). Note that we exploit the embedding \(\mathbb {S}^1\hookrightarrow \mathbb {R}^2\) here. Summarizing, we obtain the closed-loop hybrid system

$$\begin{aligned} \mathcal {H}:\left\{ \begin{array}{llll} &{}\dot{x}=\mu (x,q)\Omega x,\, &{}\dot{q}=0,\quad &{}(x,q)\in C\\ &{}x^+ = x,\,&{}q^+=g_q(x),\quad &{}(x,q)\in D \end{array}\right. \end{aligned}$$
(7.4)

For the technicalities, in particular how to define \(\delta \), we refer to [35]; see also Fig. 7.1(ii). Also note that by construction the stabilization is robust, which should be contrasted with closed-loop systems as given in Example 7.2.
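
To illustrate the switching logic of (7.4), the sketch below implements it in the angle coordinate \(\theta \), in which the feedback (7.3) reduces to \(\dot{\theta }=-V_q'(\theta )\). The concrete family \(\{V_1,V_2\}\), the constant \(\kappa \) and the synergy gap \(\delta \) below are hypothetical stand-ins for the construction detailed in [35].

```python
# Sketch of the synergistic hybrid controller (7.4) in the angle coordinate theta.
import numpy as np

kappa, delta, dt, T = 0.3, 0.05, 1e-3, 20.0   # hypothetical warping, synergy gap, Euler step, horizon

def V(theta, q):
    # Hypothetical potentials: both attain their global minimum 0 at theta = pi,
    # while their remaining (maximal) critical points sit at different angles.
    s = kappa if q == 1 else -kappa
    return (1.0 + np.cos(theta)) * (1.0 + s * np.sin(theta))

def dV(theta, q):
    # Derivative of V(., q); plays the role of <grad V_q(x), Omega x> on S^1.
    s = kappa if q == 1 else -kappa
    return -np.sin(theta) * (1.0 + s * np.sin(theta)) + (1.0 + np.cos(theta)) * s * np.cos(theta)

m = lambda theta: min(V(theta, 1), V(theta, 2))

theta, q, t = 0.6, 1, 0.0                      # start near the maximum of V_1
while t < T:
    if V(theta, q) >= m(theta) + delta:            # jump set D: current potential is "too large"
        q = 1 if V(theta, 1) == m(theta) else 2    # jump map g_q: switch to a minimizing index
    else:                                          # flow set C
        theta += dt * (-dV(theta, q))              # u = mu(x, q) = -<grad V_q(x), Omega x>
    t += dt

print(theta % (2 * np.pi), q)  # theta approaches pi
```

Starting near the maximum of \(V_1\), the gradient flow of \(V_1\) alone would stall; since \(V_2\) is strictly smaller there, the state lies in the jump set, the logic variable switches to \(q=2\) and the flow proceeds towards \(\theta =\pi \).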

See, for example, [36, 38] for constructive results on \(\textsf{SO}(3,\mathbb {R})\), [42] for more on Example 7.3 and for material concerning hybrid reinforcement learning, and [6, 54] for the hybrid approach in the context of optimization on manifolds.

A particular instance of a hybrid dynamical system is a switched dynamical system [32], [25, Sects. 1.4 and 2.4], that is, a dynamical system of the form \(\dot{x}=f_{\sigma }(x)\) for some switching signal \(\sigma \), e.g., \(\sigma :\textrm{dom}(f)\rightarrow \{1,2,\dots ,N\}\), \(N\in \mathbb {N}\). As alluded to in Fig. 1.1, introducing a switch, that is, a discontinuity, can allow for global stabilization. Indeed, (7.4) is a manifestation of a switched system. See for example [32, pp. 87–88] for a switching-based solution to the stabilization of the nonholonomic integrator (6.1). Switching gears slightly, another way to overcome topological obstructions is to construct a set of local controllers such that the union of their respective domains of attraction covers the space \(\textsf{M}\). This does not necessarily result in global stabilization; the aim is rather to rule out instability by employing a suitable switching mechanism. Assuming that these controllers are intended to stabilize contractible sets, one can bound the number of required controllers from below using the Lusternik–Schnirelmann category, as will be introduced in Sect. 8.4. Clearly, on \(\mathbb {S}^1\) one needs at least two such controllers.

In contrast to almost global stabilization techniques, under controllability assumptions, for compact sets A there always exists a hybrid controller that renders A locally asymptotically stable in a robust sense [43]. The intuition is that measurable functions can be approximated by piecewise-continuous functions and that converse Lyapunov results can be established for hybrid dynamical systems [24].

7.3.2 Topological Perplexity

As continuous feedback is frequently prohibited, one might be interested in obtaining a lower bound on the “number” of discontinuous actions required to render a set globally asymptotically stable. For the global stabilization of a point p on \(\mathbb {S}^1\), this number is clearly 1, cf. Fig. 1.1(iii). Recent work by Baryshnikov and Shapiro sets out to quantify this for generic spaces [7, 8]. The intuition is as follows: consider the desire to stabilize a point p on the torus \(\mathbb {T}^2\) and recall its non-trivial singular homology groups

$$\begin{aligned} H_k(\mathbb {T}^2;\mathbb {Z}) \simeq {\left\{ \begin{array}{ll} \mathbb {Z} \quad &{}\text {if }k\in \{0,2\}\\ \mathbb {Z}\oplus \mathbb {Z}\quad &{}\text {if } k=1 \end{array}\right. }. \end{aligned}$$

To globally stabilize a point, \(\mathbb {T}^2\) must be deformed to a contractible space. Then, roughly speaking, as \(H_1(\mathbb {R}^2;\mathbb {Z})\simeq 0\), this can be achieved, for example, via two one-dimensional cuts. These cuts correspond to the discontinuities one needs to introduce to globally stabilize p on \(\mathbb {T}^2\). If one instead desires to stabilize \(\mathbb {S}^1\hookrightarrow \mathbb {T}^2\), a single one-dimensional cut suffices, as \(H_1(\mathbb {S}^1\times \mathbb {R};\mathbb {Z})\simeq \mathbb {Z}\). Baryshnikov and co-workers propose a method to quantify this approach based merely on topological information, that is, independent of metrics and coordinates. See also the algorithmic work [22].

In a similar vein, one can consider the work by Gottlieb and Samaranayake on indices of discontinuous vector fields [26]. The intuition is that Definition 1 considers a topological sphere around 0; in that sense there is no difference between Fig. 1.1(iii) and (iv), i.e., one merely considers the boundary of a neighbourhood around the discontinuity.