A survey of recent results in quantized and event-based nonlinear control

  • Zhong-Ping Jiang
  • Teng-Fei Liu
Survey Paper


Constructive nonlinear control design has undergone rapid and significant progress over the last three decades. In this paper, a review of recent results in this important field is presented with a focus on interdisciplinary topics at the interface of control, computing and communications. In particular, it is shown that the nonlinear small-gain theory provides a unified framework for solving problems of quantized feedback stabilization and event-triggered control for nonlinear systems. Some open questions in quantized and networked nonlinear control systems are discussed.


Keywords: Nonlinear systems, nonlinear control, quantized control, event-based control, small-gain theory

1 Introduction

Stabilization, a fundamentally important topic in control theory, seeks a feedback control law that renders a dynamical system stable at an equilibrium point of interest. Early efforts to tackle this problem for nonlinear systems resulted in nonlinear feedback algorithms whose use is restricted to certain classes of nonlinear systems. Examples of such algorithms include sliding mode control and absolute-stability-based methods. For instance, when sliding mode control is applied to design stabilizing controllers, the nonlinearities (which may be uncertain) are assumed to fall into the space spanned by the input, i.e., some kind of matching condition is required. The latest developments in sliding mode control have relaxed this assumption for special classes of nonlinear systems, but with limited success for higher-dimensional nonlinear systems with unmatched uncertainties. Global stabilization of nonlinear time-invariant systems also differs from its linear counterpart: early results on state and output feedback often assume a global Lipschitz condition on the system nonlinearities. Given these restrictions of early nonlinear feedback algorithms, linear thinking yields only limited success in solving the stabilization problem for nonlinear systems. The development of fundamentally nonlinear feedback design tools has thus become a hot topic in nonlinear control. Starting in the late 1980s, following the publication of the survey paper[1], significant progress has been made in nonlinear stabilization. Over the last three decades, many innovative ideas and methods have been proposed by numerous researchers for the local, semiglobal and global stabilization of nonlinear systems[2, 3, 4, 5, 6, 7, 8].
Other byproducts of this collective effort include advances in other important topics in nonlinear control, such as output regulation of nonlinear systems[9, 10, 11, 12, 13, 14, 15], optimal nonlinear control, and output feedback control[16, 17, 18]. Due to space limitations and the limits of the authors' own expertise, a tough choice must be made here: we selectively discuss topics tied to our recent research and cite results closely related to those topics.

The layout of the paper is as follows. Section 2 first states the formulation of the stabilization problem and then reviews some early results in nonlinear control algorithms. Section 3 presents the basics of nonlinear small-gain theory and some recent developments in quantized feedback stabilization of nonlinear systems by means of small-gain theorems. Section 4 focuses on the emerging topic of event-based nonlinear control that aims to update controllers only when some events occur. Both centralized and decentralized systems with event triggers will be discussed. Small-gain based solutions to event-based nonlinear control design will be presented. Finally, Section 5 closes this review article with some brief concluding remarks and open problems.

2 Early results in nonlinear feedback stabilization

2.1 Problem statement

The stabilization problem is concerned with how to use a feedback law to render the closed-loop system asymptotically stable at an equilibrium of interest in the sense of Lyapunov. When the closed-loop system is globally asymptotically stable at the equilibrium, the stabilization problem is called global stabilization. For the sake of simplicity, we will study single-input single-output (SISO) nonlinear control-affine systems described by ordinary differential equations:
$$\begin{array}{*{20}c} {\dot x = f(x) + g(x)u} \\ {y = h(x)} \\ \end{array} $$
where x ∈ R^n is the system state, u ∈ R is the control input, and y ∈ R is the system output. Notice that many stabilization results presented in this paper (and the quoted papers) can be, and indeed have been, generalized to multi-input multi-output (MIMO) nonlinear systems.
In (1), we assume that f, g and h are sufficiently smooth functions and that f(0) = 0 and h(0) = 0. As a result, the stabilization problem is formulated as follows: When can we find a feedback control law to asymptotically stabilize the system (1) at the origin x = 0? In general, we are interested in either state-feedback controllers of the form
$$u = {\mu (x)}$$
or output-feedback controllers of the form
$$\begin{array}{*{20}c} {\dot \eta = \nu (y,\eta )} \\ {u = \mu (y,\eta ).} \\ \end{array} $$
Like (3), we may also consider dynamic state-feedback control laws instead of the (static) state-feedback control law (2). Notice that when dim(η) = 0, the output-feedback law (3) is often referred to as a static output-feedback controller. In this paper, we omit this topic, which is important but not yet completely resolved for nonlinear control systems.

In the sequel, we first address the existence of a state-feedback control law for the stabilization of the nonlinear system (1). Then, we present some tools that allow us to explicitly construct stabilizing control laws.

2.2 Explicit design algorithms

Control Lyapunov functions. The term “control Lyapunov function” (CLF) was introduced by Sontag[19]; the notion was previously studied in the seminal work of Artstein[20] in the broader context of nonlinear systems with arbitrary closed control value sets. A smooth, positive definite, radially unbounded function V: R^n → R is said to be a CLF for system (1) if the following holds:
$$\mathop {\inf}\limits_{u \in {\rm{R}}} \{{L_f}V(x) + u{L_g}V(x)\} < 0,\quad \forall x \neq 0$$
where \({L_f}V\) and \({L_g}V\) are the Lie derivatives of V along the vector fields f and g, respectively. Clearly, the above condition is equivalent to the following implication:
$${L_g}V(x) = 0\quad \Rightarrow \quad {L_f}V(x) < 0,\quad \forall x \neq 0.$$
It is shown in [19, 20] that the existence of a (global) CLF is necessary and sufficient for the global stabilizability of system (1). Of particular interest is the fact that Sontag gives a “universal” formula for the construction of a globally stabilizing control law.
  • Theorem 1. Assume that V is a smooth CLF for system (1). Then, a feedback control law u = μ(x) that globally asymptotically stabilizes the system takes the following form:
    $$\mu (x) = \left\{ {\begin{array}{*{20}{l}} { - \frac{{{L_f}V(x) + \sqrt {{{\left({{L_f}V(x)} \right)}^2} + {{\left({{L_g}V(x)} \right)}^4}}}}{{{L_g}V(x)}},}&{{\rm{if}}\;{L_g}V(x) \ne 0} \\ {0,}&{{\rm{otherwise}}.} \\ \end{array}} \right.$$

Generally speaking, the control law μ in (6) may not be smooth everywhere. It is shown that, under a certain small control property, the control law μ is “almost smooth”, i.e., continuous at x = 0 and smooth everywhere else. This small control property is defined as follows[19]:

For each ε > 0, there exists a constant δ > 0 such that, for any x satisfying 0 < |x| < δ, there is some u with |u| < ε such that \({L_f}V(x) + u{L_g}V(x) < 0\).
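Sontag's formula is straightforward to implement. The sketch below is our own illustration, not from the paper: the scalar plant \(\dot x = x + u\), the CLF V(x) = x²/2, and the function name are all ours; only the formula itself is from (6).

```python
import math

def sontag_control(LfV, LgV):
    """Sontag's universal formula: returns u = mu(x) from the Lie
    derivatives LfV = L_f V(x) and LgV = L_g V(x)."""
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV ** 2 + LgV ** 4)) / LgV

# Our toy plant: xdot = x + u with CLF V(x) = x^2/2, so L_f V = x^2 and
# L_g V = x.  The formula gives mu(x) = -(1 + sqrt(2)) x.
x = 1.0
print(sontag_control(x ** 2, x))  # -(1 + sqrt(2)) ≈ -2.41421
```

For this plant the closed loop is \(\dot x = -\sqrt 2\, x\), confirming global asymptotic stability.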

CLFs have been used widely in the literature of modern nonlinear control, e.g., in adaptive nonlinear control[21, 22, 23], robust nonlinear control[24], nonlinear optimal control[24, 25], nonlinear time-delay systems[26, 27], and multi-agent systems[28], to name only a few.

It should be mentioned that the CLF generalizes the notion of a Lyapunov function from dynamical systems without controls to nonlinear control systems. As is well documented in the literature, the construction of a Lyapunov function or a CLF for a nonlinear system is far from trivial. Nonetheless, for some important classes of nonlinear control systems, tools are available for generating a CLF and a stabilizing control law. It is also worth noting that we may construct stabilizing controllers for specific classes of nonlinear systems without assuming the existence of a CLF. We will review some of these existing tools in the remainder of this section.

Backstepping. Integrator backstepping, or backstepping, is a recursive technique[21, 29, 30] that has proved useful in various contexts of stabilization for nonlinear cascaded systems. The term backstepping was coined by Kokotović[31], although in the early development of nonlinear control theory the idea of “adding an integrator” was commonly adopted by European researchers[32, 33, 34]. To begin with, let us consider a single-input nonlinear cascaded system of the form
$${\dot x_1} = {f_1}({x_1},{x_2})$$
$${\dot x_2} = {f_2}({x_1},{x_2}) + u.$$
The essence of backstepping is to reduce the controller design problem for a higher-order system to a design problem for a reduced-order system. In the above case of cascaded systems, the reduced-order system is the x1-subsystem, driven by the state x2 of the x2-subsystem. Assume that the x1-subsystem, when x2 is regarded as the (virtual) control input, is stabilizable by a smooth control law x2 = μ1(x1), μ1(0) = 0, with respect to a smooth CLF V1(x1). The backstepping approach seeks to generate a stabilizing control law for the cascaded system (7) and (8). As a by-product of backstepping, a CLF is also obtained for the cascaded system, as shown in the following result.
  • Theorem 2. A control law that globally asymptotically stabilizes the cascade system (7)–(8) takes the form:
    $$\matrix{{u = - {c_2}({x_2} - {\mu _1}({x_1})) - {f_2}({x_1},{x_2}) + {{\partial {\mu _1}} \over {\partial {x_1}}}{f_1}({x_1},{x_2})} \hfill \cr{\quad - {{\partial {V_1}({x_1})} \over {\partial {x_1}}}\int_0^1 {{{\partial {f_1}} \over {\partial {x_2}}}({x_1},{\mu _1}({x_1}) + \lambda ({x_2} - {\mu _1}({x_1})))\,{\rm{d}}\lambda} \hfill \cr}$$
    where c2 is an arbitrary positive constant. Moreover, \({V_2}({x_1},{x_2}) = {V_1}({x_1}) + {1 \over 2}{({x_2} - {\mu _1}({x_1}))^2}\) is a CLF for the cascaded system (7) and (8).
Indeed, the proof follows by differentiating the CLF V2 along the solutions of system (7) and (8), i.e.,
$${\dot V_2} = {{\partial {V_1}({x_1})} \over {\partial {x_1}}}{f_1}({x_1},{\mu _1}({x_1})) - {c_2}{({x_2} - {\mu _1}({x_1}))^2}.$$
By hypothesis, \({{\dot V}_2}\) is a negative definite function in (x1, x2).
As explained in [21, 31], backstepping assumes neither that the linearization of the cascaded system (7) and (8) is controllable nor that the system is feedback linearizable. An elementary example of a bilinear system was given in [31]:
$${\dot x_1} = {x_1}{x_2},\quad {\dot x_2} = u.$$
Linearizing this bilinear system at x = 0 yields an uncontrollable linearized model. However, by means of backstepping, we can easily obtain a globally asymptotically stabilizing control law. Indeed, selecting \({\mu _1} = - x_1^2\) and \({V_1} = {1 \over 2}x_1^2\), a direct application of (9) yields the globally asymptotically stabilizing control law
$$u = - {x_2} - 2x_1^2 - 2x_1^2{x_2}.$$
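The claim is easy to check in simulation. Below is a minimal sketch of our own (forward-Euler discretization; the step size, horizon, and initial condition are illustrative choices) of the bilinear example under the backstepping law:

```python
# Our illustration: forward-Euler simulation of the bilinear example
# x1dot = x1*x2, x2dot = u under the law u = -x2 - 2*x1**2 - 2*x1**2*x2.
def simulate(x1, x2, T=50.0, dt=1e-3):
    for _ in range(int(T / dt)):
        u = -x2 - 2.0 * x1 ** 2 - 2.0 * x1 ** 2 * x2
        x1, x2 = x1 + dt * x1 * x2, x2 + dt * u
    return x1, x2

x1f, x2f = simulate(0.5, 0.5)
# x1 decays only polynomially (Vdot2 = -x1^4 - (x2 + x1^2)^2 is quartic
# near the origin), while the backstepping error x2 + x1^2 remains small.
```

The slow convergence of x1 reflects the quartic negativity of \({\dot V_2}\) near the origin, not a defect of the simulation.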
The initial version of backstepping as outlined in Theorem 2 assumes precise knowledge of the vector fields in the cascaded system (7) and (8). This seeming drawback has been removed by improved versions of backstepping known as adaptive backstepping[21] and robust backstepping[24]. More interestingly, when only the output information is available to the designer, output-feedback backstepping has been developed based on nonlinear filters or observers[21, 35, 36]. In parallel, passivity-based control plays an important role in constructive nonlinear control design and applications[37, 38, 39].

It should also be mentioned that several research publications have been devoted to relaxing the smoothness of the virtual control laws μ1 and/or of the system nonlinearities[40, 41, 42].

3 Small-gain method and applications

3.1 Basics of nonlinear small-gain theory

The small-gain method is a tool for constructive nonlinear feedback design particularly suited to nonlinear and interconnected control systems with parametric and dynamic uncertainties[23, 36, 43, 44]. Take the system (7) and (8) as an example. When the x1-subsystem is considered as a dynamic uncertainty driven by x2, with unknown state x1 and unknown dynamics f1, the conventional Lyapunov designs presented above are not directly applicable. Additionally, it is not clear how to apply traditional approximation techniques, such as neural networks and fuzzy systems, to approximate “nonlinear dynamic uncertainties” represented by f2(x1, x2).

To address this challenge, generalized nonlinear small-gain theorems were developed[43] by means of Sontag’s input-to-state stability (ISS) property (see the tutorial[7]). A nonlinear system of the form \(\dot x = f(x,u)\) is said to be input-to-state stable (ISS) with respect to the input u ∈ R^m if there exist a function β of class KL and a function γ of class K such that, for any initial condition x(0) and any locally bounded input u: R+ → R^m, the solution x(t) is defined for every t ≥ 0 and satisfies
$$\left\vert {x(t)} \right\vert \leq \max \{\beta (\left\vert {x(0)} \right\vert ,t),\,\gamma (\left\Vert {{u_{[0,t]}}} \right\Vert)\}$$
where ∥u[0,t]∥ stands for the L∞-norm of the truncation of u over [0, t]. Often, γ is referred to as a gain function of the ISS system. It should be mentioned that the max-based ISS definition is mathematically equivalent to Sontag’s original definition of ISS[45], where the sum operator is used in (13) instead of max, with a possibly different pair of functions (β, γ).
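The max-form estimate (13) can be illustrated numerically. Everything in the sketch below is our own example: the scalar plant \(\dot x = -x + u\) is ISS, and since \(\vert x(t)\vert \leq \vert x(0)\vert {\rm{e}}^{-t} + \Vert u \Vert (1 - {\rm{e}}^{-t})\) and a + b ≤ 2 max{a, b}, the pair β(r, t) = 2r e^{−t}, γ(s) = 2s is one valid choice.

```python
import math

# Our illustration: check the max-form ISS bound (13) for xdot = -x + u
# with beta(r, t) = 2 r exp(-t) and gain gamma(s) = 2 s.
def check_iss_bound(x0, u, T=10.0, dt=1e-3):
    """Forward-Euler simulation; returns True if the ISS bound held at
    every step (within a small numerical tolerance)."""
    x, t, sup_u, ok = x0, 0.0, 0.0, True
    for _ in range(int(T / dt)):
        sup_u = max(sup_u, abs(u(t)))           # running sup-norm of u
        x += dt * (-x + u(t))
        t += dt
        bound = max(2.0 * abs(x0) * math.exp(-t), 2.0 * sup_u)
        ok = ok and abs(x) <= bound + 1e-6
    return ok

print(check_iss_bound(3.0, lambda t: 0.5 * math.sin(t)))  # True
```

Such a grid-in-time check is of course only a sanity check of a particular (β, γ) pair, not a proof of ISS.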

Undoubtedly, ISS has become a fundamental tool for solving many analysis and synthesis problems in nonlinear systems, as documented in the tutorial paper by Sontag[7]. Its important role in advancing the state of the art in robust nonlinear control with respect to dynamic uncertainties has led to the introduction of generalized/nonlinear small-gain theorems. The following provides a quick review.

The ISS small-gain theorem states sufficient conditions under which an interconnection of two ISS subsystems remains ISS. More precisely, consider an interconnected system of two ISS subsystems
$${\dot x_1} = {f_1}({x_1},{x_2},v)$$
$${\dot x_2} = {f_2}({x_1},{x_2},v).$$
Assume that each x_i-subsystem is ISS in the sense of (13) and has a gain function γ_i with respect to the input x_j, with j ≠ i, for i, j = 1, 2, as shown in Fig. 1.
  • Theorem 3[43, 46]. Under one of the following equivalent small-gain conditions:
    $${\gamma _1} \circ {\gamma _2}(s) < s,\quad \forall s > 0$$
    $${\gamma _2} \circ {\gamma _1}(s) < s,\quad \forall s > 0$$
    the interconnected system (14) and (15) is ISS when v is considered as the input.
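The small-gain condition (16) is easy to probe numerically. The two gain functions below are our own illustrative choices, and a grid check is of course evidence rather than a proof:

```python
# Illustrative class-K gains (ours): gamma1(s) = s/2 and gamma2(s) = s/(1+s).
gamma1 = lambda s: 0.5 * s
gamma2 = lambda s: s / (1.0 + s)

def small_gain_holds(g1, g2, grid):
    """Check the composed-gain contraction g1(g2(s)) < s on a grid of s > 0."""
    return all(g1(g2(s)) < s for s in grid)

grid = [10.0 ** k for k in range(-6, 7)]
# Conditions (16) and (17) both hold for this pair:
print(small_gain_holds(gamma1, gamma2, grid),
      small_gain_holds(gamma2, gamma1, grid))  # True True
```

Here \(\gamma_1 \circ \gamma_2(s) = s/(2(1+s)) < s\) for all s > 0, so the interconnection of the corresponding ISS subsystems would be ISS by Theorem 3.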

    As well documented in the literature, the global stabilization of the system (7) and (8) with only partial-state information x2 can be addressed from a small-gain perspective. The crucial difference from other Lyapunov designs is that system (7) and (8) is now treated as an interconnected system. The only knowledge we need is that the x1-subsystem is ISS with a known ISS gain, say γ1 of class K∞. In order to invoke the small-gain theorem, we only need to show that a feedback law of the form u = κ(x2) can be designed to render the x2-subsystem ISS with a gain γ2 strictly smaller than \(\gamma _1^{- 1}\), so that the small-gain condition (16) or (17) holds. Such a result is referred to as a gain assignment theorem[43]. Theorem 4 states mild assumptions under which the global stabilization problem for system (7) and (8) with incomplete state and dynamics information is solvable.

  • Theorem 4. Assume that the x1-subsystem is ISS with a gain γ1 of class K. It is further assumed that f2(x1, x2) is dominated by σ1(|x1|) + σ2(|x2|), with the σ_i being locally Lipschitz and positive semi-definite. Then, the global stabilization of system (7) and (8) is solvable by a continuous partial-state feedback law u = κ(x2).

Fig. 1  An interconnected system with external inputs

The above result was initially developed in [43] and has been applied to various control problems[36, 44, 46, 47]. More recently, it has been extended to the context of nonlinear feedback stabilization with quantized signals[48, 49, 50, 51, 52].

3.2 Application to quantized feedback stabilization

The convergence of control and communications has led to many new control problems of practical interest. Feedback stabilization with quantized signals is just one of them. A quantizer is a nonlinear operator that maps a signal from a continuous region into a discrete set of values, and is thus a discontinuous function. This special feature poses severe technical challenges for quantized controller design for both linear and nonlinear systems; see, e.g., [53, 54, 55, 56, 57] for quantized stabilization of linear systems and [58, 59, 60, 61] for extensions to nonlinear systems.

Despite its theoretical importance and practical relevance, quantized feedback stabilization of nonlinear systems has received relatively little attention to date. There are several technical obstacles to overcome. First and foremost, when quantization is introduced at the level of output measurement and/or control input, the feedback control law to be implemented is discontinuous in the state variable. The interplay of this discontinuity with the nonlinearity and dimensionality of the system immediately obstructs the use of recursive feedback design tools such as backstepping. Second, dynamic quantization is often preferable to logarithmic quantization in handling the finite word length of the quantizers in networked control systems. The key idea of dynamic quantization is to dynamically adjust the range of the quantizer through “zooming-in” and “zooming-out” phases. To avoid the finite-escape-time phenomenon during the “zooming-out” phase, a common feature of the existing work[50, 61, 62] is that forward completeness and small-time norm-observability are assumed for the (open-loop) unforced system. Clearly, these assumptions severely limit the class of nonlinear systems that can be addressed for quantized stabilization. Third, the closed-loop system with quantized control is discontinuous and often hybrid; the stability analysis of such systems is still a hot topic of research. Last but not least, quantized feedback control of nonlinear systems in the presence of large uncertainty remains a little-explored research area.

As an illustration of the above points, let us consider the quantized output-feedback control problem for a class of nonlinear systems transformable into the generalized output-feedback form
$$\dot z = {\Delta _z}(z,y,d)$$
$${\dot x_i} = {x_{i + 1}} + {\Delta _i}(y,z,d),\quad 1 \leq i \leq n$$
$${x_{n + 1}} = {q_\mu}(u)$$
$$y = {x_1}$$
where \(z \in {{\bf{R}}^{{n_z}}}\) and \(x = ({x_1}, \cdots ,{x_n}) \in {{\bf{R}}^n}\) are the unmeasured state variables, y ∈ R is the measured output, u ∈ R is the control input, and \(d \in {{\bf{R}}^{{n_d}}}\) is the (bounded, time-varying) disturbance input. \(q_\mu\) is the actuator quantizer with quantization variable μ > 0. \(\Delta_z\) and \(\Delta_i\), 1 ≤ i ≤ n, are locally Lipschitz but unknown nonlinear functions. Very often, the z-dynamics is referred to as the dynamic uncertainty[36, 43, 44].

It should be mentioned that the generalized output-feedback form (18)–(21) was first introduced in [17], in the absence of quantization and the disturbance input d, and is an extension of the conventional output-feedback form with only output nonlinearities[21, 35].

The control objective is to find, if possible, a quantized output-feedback control law that drives the output signal to within an arbitrarily small neighborhood of the origin while keeping all closed-loop signals bounded.

Following the past literature of nonlinear control theory, the following assumptions are made on the system in the generalized output-feedback form:
  • Assumption 1. The z-system is ISS and has a positive-definite and radially unbounded ISS-Lyapunov function V z that satisfies the implication
    $${V_z}(z) \geq \max \{\gamma _z^y(\left\vert y \right\vert),\;\gamma _z^d(\left\vert d \right\vert)\} \Rightarrow \nabla {V_z}(z)\dot z \leq - {\alpha _z}(\left\vert z \right\vert)$$
    where \(\gamma _z^y,\gamma _z^d\) and α z are class-K functions.
  • Assumption 2. For each i = 1, 2, ⋯, n, the uncertain function Δ_i is overbounded by a class-\({K_\infty}\) function \({\psi _{{\Delta _i}}}\), i.e.,
    $$\left\vert {{\Delta _i}(y,z,d)} \right\vert \leq {\psi _{{\Delta _i}}}(\left\vert {(y,z,d)} \right\vert).$$
However, besides the above two commonly used assumptions, to deal with the quantized feedback control problem, we need to introduce two additional assumptions on the system:
  • Assumption 3. The unforced system (18)–(21) with u = 0 is forward complete and small-time norm-observable with y as the output[50, 62].

  • Assumption 4. The quantizer \(q_\mu\) satisfies the following property:
    $$\left\vert {{q_\mu}(u) - u} \right\vert \leq \delta \mu,\quad {\rm{if}}\;\left\vert u \right\vert \leq M\mu$$
    where M and δ are positive constants, Mμ is the range of the quantizer, and δμ is the maximum quantization error for all u in the range of the quantizer. As usual, μ is called the “zooming” variable.
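A uniform quantizer with step 2δμ satisfies Assumption 4. The sketch below is our own minimal construction (the constants δ and M are illustrative, and `q_mu` is our name), not the quantizer used in [50]:

```python
def q_mu(u, mu, delta=0.1, M=10.0):
    """Uniform quantizer with zooming variable mu (our sketch): step
    2*delta*mu, range M*mu, so |q_mu(u) - u| <= delta*mu for |u| <= M*mu."""
    step = 2.0 * delta * mu
    v = step * round(u / step)
    return max(-M * mu, min(M * mu, v))  # saturate at the range boundary

mu = 0.5  # shrinking mu refines the quantizer ("zooming in")
errs = [abs(q_mu(0.005 * i, mu) - 0.005 * i) for i in range(-1000, 1001)]
print(max(errs) <= 0.1 * mu + 1e-12)  # True
```

The grid scans the full range |u| ≤ Mμ = 5 and confirms the error bound δμ = 0.05.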
As mentioned previously, the discontinuity of quantized feedback control calls for new ideas and controller design tools. Here, we begin with a reduced-order partial-state estimator adapted from [36]:
$${\dot \xi _i} = {\xi _{i + 1}} + {L_{i + 1}}y - {L_i}({\xi _2} + {L_2}y),\;2 \leq i \leq n - 1$$
$${\dot \xi _n} = {q_\mu}(u) - {L_n}({\xi _2} + {L_2}y)$$
where the L_i's are constants to be determined later.
By direct computation, the time-derivative of the observation error \(\zeta = {({x_2} - {L_2}y - {\xi _2}, \cdots ,{x_n} - {L_n}y - {\xi _n})^{\rm{T}}}\) can be written in compact form
$$\dot \zeta = A\zeta + {\Delta ^{\ast}}(y,z,d)$$
where A is rendered Hurwitz by a proper choice of the constants L_i, and each component of the vector-valued function Δ* is a linear combination of the functions Δ_i, i = 1, 2, ⋯, n.
As can be directly checked, the ζ-system (24) is ISS with respect to the inputs y, z and d, and has a quadratic ISS-Lyapunov function \(V_\zeta = \zeta^{\rm{T}} P \zeta\), where P is the positive-definite, symmetric matrix solving the Lyapunov matrix equation \(PA + A^{\rm{T}}P = -2I_{n-1}\). More precisely, using Assumption 2, there exist class-K functions \(\gamma _\zeta ^y,\gamma _\zeta ^z,\gamma _\zeta ^d,{\alpha _\zeta}\) such that
$${V_\zeta}(\zeta) \geq \max \{\gamma _\zeta ^y(\left\vert y \right\vert),\gamma _\zeta ^z(\left\vert z \right\vert),\gamma _\zeta ^d(\left\vert d \right\vert)\} \Rightarrow \nabla {V_\zeta}\dot \zeta \leq - {\alpha _\zeta}(\left\vert \zeta \right\vert).$$
This fact, together with Assumption 1, implies that the cascade-interconnected system comprised of z-system (18) and ζ-system (24) is ISS with respect to the inputs y and d.
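A small worked instance of our own may help make the construction concrete. For n = 3 the error system (24) has ζ ∈ R² and A = [[−L2, 1], [−L3, 0]]; the gains L2 = 3, L3 = 2 are an illustrative choice making A Hurwitz (eigenvalues −1 and −2), and the stated P is verified to solve the Lyapunov matrix equation:

```python
# Worked instance (ours): n = 3, L2 = 3, L3 = 2, so the error matrix is
# A = [[-L2, 1], [-L3, 0]] with eigenvalues -1 and -2 (Hurwitz).
A = [[-3.0, 1.0], [-2.0, 0.0]]
# Candidate solution of P A + A^T P = -2 I (symmetric, positive definite).
P = [[1.0, -1.0], [-1.0, 2.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

PA = matmul(P, A)
# Since P is symmetric, A^T P = (P A)^T.
AtP = [[PA[j][i] for j in range(2)] for i in range(2)]
S = [[PA[i][j] + AtP[i][j] for j in range(2)] for i in range(2)]
print(S)  # [[-2.0, 0.0], [0.0, -2.0]]
```

With this P, \(V_\zeta = \zeta^{\rm{T}} P \zeta\) strictly decreases along \(\dot \zeta = A\zeta\), as required.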
With this in mind, we are ready to develop a novel small-gain based quantized output-feedback controller design. The combined controller/observer system takes the following form
$$\dot \zeta = A\zeta + {\Delta ^{\ast}}(y,z,d)$$
$$\dot z = {\Delta _z}(z,y,d)$$
$$\dot y = {\xi _2} + {L_2}y + {\zeta _2} + {\Delta _1}(y,z,d)$$
$${\dot \xi _i} = {\xi _{i + 1}} + {L_{i + 1}}y - {L_i}({\xi _2} + {L_2}y),\;2 \leq i \leq n - 1$$
$${\dot \xi _n} = u + \tilde u - {L_n}({\xi _2} + {L_2}y)$$
where ũ = q μ (u) − u is the input quantization error.

Obviously, system (25)–(29) is a higher-order variant of the system (7) and (8), with x1 = (ζ, z) and with more than one nonlinear integrator appended.

The presence of the input quantization error ũ requires a significant modification of the small-gain design method of Section 3.1, together with a dynamic update of the zooming variable:
$$\mu ({t_{k + 1}}) = Q(\mu ({t_k})),\quad k \in {{\rm{Z}}_{\rm{+}}}$$
where Q represents the dynamic quantization logic and \(t_{k+1} - t_k = t_d > 0\) for all k ∈ Z+.
Without going into the details, Theorem 5, proven in [50], may be seen as an extension of Theorem 4 to the case of actuator quantization:
  • Theorem 5. Under Assumptions 1–4, the quantized output-feedback control problem is solvable for nonlinear systems transformable to the generalized output-feedback form (18)–(21).

  • Remark 1. Under some mild conditions, the controller can be fine-tuned to achieve asymptotic convergence of the state signals to the origin. See [63] for some initial results.

4 Event-based nonlinear control

The study of event-triggered control has recently attracted considerable attention within the control systems community. A typical event-triggered control system can be viewed as a sampled-data system in which data sampling is triggered by events depending on the real-time system state, and thus need not be periodic. Compared with traditional periodic sampling, event-triggered sampling takes into account the system behavior between the sampling time instants and has proved quite useful in reducing the waste of computation and communication resources in networked control systems. Event-triggered sampling also provides solutions to the sampled-data control of nonlinear systems, for which periodic sampling may not work very well.

Significant contributions have been made to the literature of event-triggered control, e.g., [64, 65, 66, 67, 68, 69, 70, 71] and the references therein. Specifically, in [65, 69], impulsive control methods are developed to keep the states of first-order stochastic systems inside certain thresholds. In [72, 73], prediction of the real-time system state between the sampling time instants is employed to generate the control signal, and the prediction is corrected by data sampling when the difference between the true state and the predicted state is too large. For event-based control of nonlinear systems, Tabuada[67] considered systems admitting controllers that guarantee robustness with respect to sampling errors; the event trigger is then designed such that the sampling error is bounded by a state-dependent threshold ensuring convergence of the system state. Marchand et al.[74] proposed a universal formula for event-based stabilization of general nonlinear systems affine in the control, extending Sontag’s result for continuous-time stabilization[19]. Tallapragada and Chopra[75] proposed a Lyapunov condition for tracking control of nonlinear systems. The designs have been extended to distributed networked control[76, 77], output-feedback control and decentralized control[71], and systems with quantized measurements[78], to name a few. The reader may consult the nice tutorial[79] for recent developments in event-triggered and self-triggered control. For practical implementation of event-triggered control, infinitely fast sampling should be avoided, i.e., the intervals between the sampling time instants should be lower bounded by some positive constant[80]. In the context of event-based control, due to the hybrid nature of the closed loop, forward completeness becomes a subtle issue.

By considering an event-triggered control system as an interconnection of the controlled system and the event trigger (as shown in Fig. 2), the small-gain theorem has been applied to event-triggered control of nonlinear systems. Specifically, with the small-gain methods, event-triggering mechanisms can be designed to avoid infinitely fast sampling, and at the same time, to achieve asymptotic stabilization. The forward completeness issue with event-triggered control can be addressed systematically by using ISS small-gain arguments.
Fig. 2

Event-triggered control system as an interconnection of two subsystems

4.1 Small-gain based event-triggering controllers

An event-triggered state-feedback control system generally takes the following form:
$$\dot x(t) = f(x(t),u(t))$$
$$u(t) = v(x({t_k})),\quad t \in [{t_k},{t_{k + 1}}),\;k \in S \subseteq {{\rm{Z}}_{\rm{+}}}$$
where x ∈ R^n is the state, u ∈ R^m is the control input, f: R^n × R^m → R^n is a locally Lipschitz function representing the system dynamics, and v: R^n → R^m is a locally Lipschitz function representing the control law. It is assumed that f(0, v(0)) = 0. The time sequence \(\{t_k\}_{k \in S}\) is determined online based on the measurement of the real-time system state. Suppose that x(t) is right-maximally defined for all t ∈ [0, Tmax) with 0 < Tmax ≤ ∞. With respect to the possible finite-time accumulation of t_k and finite-time divergence of x(t), we consider three cases:
  • Case 1. S = Z+ and \(\lim_{k \rightarrow \infty} t_k < \infty\), i.e., Zeno behavior[81].

  • Case 2. S = Z+ and \(\lim_{k \rightarrow \infty} t_k = \infty\). In this case, x(t) is defined on [0, ∞).

  • Case 3. S is a finite set {0, ⋯, k*} with k* ∈ Z+, i.e., there are finitely many sampling time instants. In this case, \({t_{k\ast}} < {T_{\max}}\) and we set \({t_{k\ast + 1}} = {T_{\max}}\) for convenience of discussion.

It should be noted that, in any case, x(t) is defined for all \(t \in [0,{T_{\max}})\). With an appropriate event-trigger design, we will show that \(\inf_{k \in S}\{t_{k+1} - t_k\} > 0\), which rules out Case 1. Also, by means of small-gain arguments, we will prove that Tmax = ∞ in Case 3.

Define
$$w(t) = x({t_k}) - x(t),\quad t \in [{t_k},{t_{k + 1}}),\;k \in S$$
as the sampling error, and rewrite the control input as
$$u(t) = v(x(t) + w(t)).$$
By substituting (32) into (31), we have
$$\dot x(t) = f(x(t),v(x({t_k}))) = :\bar f(x(t),x({t_k})).$$
Then, by using (34), we have
$$\dot x(t) = \bar f(x(t),x(t) + w(t)).$$

If w(t) is not adjustable, then the event-triggered control problem reduces to a measurement feedback control problem. The basic idea of event-triggered control is to adjust w(t) online with an appropriate data-sampling strategy, so as to achieve asymptotic convergence of x(t), if possible. From this point of view, the structure of the closed-loop system can be represented by the block diagram shown in Fig. 2.

By taking advantage of the interconnection structure, we develop a nonlinear small-gain approach to event-triggered control of nonlinear systems.
  • Assumption 5. System (36) is ISS with w as the input, i.e., there exist \(\beta \in {\cal K}{\cal L}\) and \(\gamma \in {\cal K}\) such that, for any initial state x(0) and any measurable and locally essentially bounded w, it holds that
    $$\left\vert {x(t)} \right\vert \leq \max \{\beta (\left\vert {x(0)} \right\vert ,t),\gamma ({\left\Vert w \right\Vert _\infty})\}$$
    for all t ≥ 0.
By the small-gain theorem, under Assumption 5, if the event trigger is designed such that |w(t)| ≤ ρ(|x(t)|) for all t ≥ 0, with \(\rho \in {\cal K}\) satisfying
$$\rho \circ \gamma < {\rm{Id}}$$
then x(t) asymptotically converges to the origin. Based on this idea, the event trigger considered in this paper is defined as follows: if x(t_k) ≠ 0, then
$${t_{k + 1}} = \inf \{t > {t_k}:\rho (\left\vert {x(t)} \right\vert) - \left\vert {x(t) - x({t_k})} \right\vert = 0\} .$$

The data-sampling event is no longer triggered if, for some specific \({k^{\ast}} \in {{\bf{Z}}_ +}\), \(x({t_{{k^{\ast}}}}) = 0\) or \(\{t > {t_{{k^{\ast}}}}:\rho (\vert x(t)\vert) - \vert x(t) - x({t_{{k^{\ast}}}})\vert = 0\} = \emptyset\). Note that, under the assumption f(0, v(0)) = 0, if \(x({t_{{k^{\ast}}}}) = 0\), then \(u(t) = v(x({t_{{k^{\ast}}}})) = 0\) keeps the system state at the origin for all \(t \in [{t_{{k^{\ast}}}},\infty)\).

With the event trigger (39), given t_k and x(t_k) ≠ 0, t_{k+1} is the first time instant after t_k such that \(\rho (\vert x({t_{k + 1}})\vert) - \vert x({t_{k + 1}}) - x({t_k})\vert = 0\). Since \(\rho (\vert x({t_k})\vert) - \vert x({t_k}) - x({t_k})\vert = \rho (\vert x({t_k})\vert) > 0\) for any x(t_k) ≠ 0 and x(t) is continuous, we have ρ(|x(t)|) − |x(t) − x(t_k)| > 0 for all t ∈ [t_k, t_{k+1}), k ∈ S. By the definition of w(t) in (33), we have
$$\left\vert {\omega (t)} \right\vert \leq \rho (\left\vert {x(t)} \right\vert)$$
for all \(t \in {\cup _{k \in S}}[{t_k},{t_{k + 1}})\).
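As a concrete illustration, the trigger above can be simulated for a scalar plant. The plant, the feedback law v, and the trigger gain ρ below are assumptions chosen only for this sketch, not taken from the paper:

```python
# Minimal sketch (illustrative assumptions): the event fires when
# rho(|x(t)|) - |x(t) - x(t_k)| reaches zero.  For the assumed plant
# xdot = x + u with hypothetical control v(x) = -2x, the closed loop is
# xdot = -x - 2w with w = x(t_k) - x; choosing rho(s) = s/4 keeps
# |w| <= |x|/4 between events, so x*xdot <= -0.5*x**2 and x converges.
def rho(s):
    return s / 4.0

dt, T = 1e-3, 10.0
x, xk, events = 1.0, 1.0, 0      # state, last sampled state, event count
for _ in range(int(T / dt)):
    u = -2.0 * xk                 # zero-order hold between events
    x += dt * (x + u)             # forward-Euler step of xdot = x + u
    if abs(x - xk) >= rho(abs(x)):  # event condition of the trigger
        xk, events = x, events + 1

assert abs(x) < 1e-2 and 0 < events < 200
```

Each event fires when the state has drifted 20% below the last sample, so the inter-event times stay bounded away from zero while x(t) converges to the origin.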
Theorem 6 presents a condition on the ISS gain γ under which a ρ can be found for the event trigger (39) such that infinitely fast sampling is avoided, i.e., \(\inf_{k \in S}\{{t_{k + 1}} - {t_k}\} > 0\), and moreover, x(t) is defined for all t ∈ [0, ∞) and asymptotically converges to the origin.
  • Theorem 6. Consider the event-triggered control system (36) with locally Lipschitz \({\bar f}\) satisfying \(\bar f(0,0) = 0\) and ω defined in (33). If Assumption 5 is satisfied with a locally Lipschitz γ, then one can find a \(\rho \in {{\cal K}_\infty}\) such that ρ satisfies (38) and \(\rho^{-1}\) is locally Lipschitz.

Moreover, with the sampling time instants triggered by (39), it can always be guaranteed that
$$\mathop {\inf}\limits_{k \in S} \{{t_{k + 1}} - {t_k}\} > 0$$
and, for any specific initial state x(0), the system state x(t) satisfies
$$\left\vert {x(t)} \right\vert \leq \breve \beta (\left\vert {x(0)} \right\vert ,t)$$
with \(\breve \beta \in {\cal K}{\cal L}\), for all t ≥ 0.

The original proof of Theorem 6 can be found in [82].

4.2 Decentralized event-based control

In this section, we consider decentralized event-triggered control for a large-scale system composed of N subsystems, of which the i-th subsystem (i = 1, ⋯, N) takes the following form
$${\dot x_i} = {f_i}(x,{e_i})$$
where \({x_i} \in {{\bf{R}}^{{n_i}}}\) is the state of the i-th subsystem, \(x = {[x_1^{\rm{T}}, \cdots ,x_N^{\rm{T}}]^{\rm{T}}} \in {{\bf{R}}^n}\), \({e_i} \in {{\bf{R}}^{{n_i}}}\) represents the sampling error of x_i, and \({f_i}:{{\bf{R}}^n} \times {{\bf{R}}^{{n_i}}} \rightarrow {{\bf{R}}^{{n_i}}}\), satisfying f_i(0, 0) = 0, represents the system dynamics.
For the i-th subsystem and for a sequence of event-triggering time instants \({\{t_k^i\} _{k \in {S_i}}}\) with \(S_i = \{0, 1, 2, \cdots\} \subseteq {{\bf{Z}}_ +}\) and \(t_0^i = 0\), define the sampling error e_i as
$${e_i}(t) = {x_i}(t_k^i) - {x_i}(t),\quad t \in [t_k^i,t_{k + 1}^i),\;\;k \in {S_i}.$$
Here, the sequence \({\{t_k^i\} _{k \in {S_i}}}\) of the i-th subsystem is supposed to be triggered by comparing the sampling error \(\vert {x_i}(t_k^i) - {x_i}(t)\vert\) with a continuous, positive threshold signal \({\mu _i}:{{\bf{R}}_ +} \rightarrow {{\bf{R}}_ +}\). Specifically, the event-triggering time instants are generated by
$$t_{k + 1}^i = \mathop {\inf}\limits_{t > t_k^i} \left\{{\vert {x_i}(t_k^i) - {x_i}(t)\vert = {\mu _i}(t)} \right\},\quad k \in {S_i}.$$
If S_i ≠ Z_+, then S_i = {0, 1, ⋯, k*}. In this case, for notational convenience, we define \(t_{{k^{\ast}} + 1}^i = \infty\). As in Section 4.1, the combination of (43), (44) and (45) implies that the closed-loop, decentralized event-triggered system is a hybrid system. Suppose that x(t) is right maximally defined on [0, Tmax). Then, it holds that
$${T_{\max}} \geq \mathop {\sup}\limits_{k \in {S_i}} \{t_k^i\}$$
for i = 1, ⋯, N. It is worth noting that the sequence \({\{t_k^i\} _{k \in {S_i}}}\) defined by (45) may be aperiodic, and thus differs fundamentally from traditional sampled-data control based on periodic sampling. This makes the stability analysis more challenging, which is the price paid for the savings in communication and computation.
Recall the definition of e i (t) in (44). The event trigger defined by (45) guarantees that
$$\left\vert {{e_i}(t)} \right\vert \leq {\mu _i}(t)$$
for all t ∈ [0, Tmax).
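The trigger (45) and the resulting bound (47) can be sketched for a single subsystem. The subsystem dynamics and the threshold signal μ_i(t) = 0.5e^{−t} below are hypothetical, chosen only for illustration:

```python
import math

# Sketch of trigger (45): sample x_i whenever the sampling error
# |x_i(t_k^i) - x_i(t)| reaches the positive threshold mu_i(t).
dt, T = 1e-3, 5.0
x, xk = 1.0, 1.0                  # state and last sampled value
event_times = []
for n in range(int(T / dt)):
    t = n * dt
    e = xk - x                    # sampling error, as in (44)
    x += dt * (-x + 0.5 * e)      # hypothetical ISS subsystem dynamics
    if abs(xk - x) >= 0.5 * math.exp(-t):   # event condition (45)
        xk = x
        event_times.append(t)

# Between events the sampling error stays below the threshold, as in (47).
assert abs(xk - x) < 0.5 * math.exp(-(T - dt))
assert len(event_times) >= 1
```

Because the threshold is strictly positive at every instant, the error is allowed to grow for a nonzero time after each sample, which is what rules out infinitely fast sampling.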
For practical implementation of the event-triggered control law, infinitely fast sampling should be avoided. We aim to develop a new approach to decentralized event-triggered control such that the following two objectives are achievable at the same time.
  • Objective 1. Infinitely fast sampling is avoided, i.e., for any specific x(0) and any specific μ i (0) > 0,i = 1, ⋯, N, the intervals \(t_{k + 1}^i - t_k^i\) between the event-triggering time instants for each x i -subsystem (i =1, ⋯, N) are lower bounded by a positive constant.

  • Objective 2. The closed-loop event-triggered system is forward complete, i.e., x(t) is defined for all t ≥ 0 for any initial condition x(0). In addition, x(t) globally asymptotically converges to the origin.

    In this paper, we focus on the event trigger design, and assume, without loss of generality[21], that local feedback control laws have been designed such that each x i -subsystem is input-to-state stable with the inputs e i and x j for ji.

  • Assumption 6. For i =1, ⋯,N, each x i -subsystem is ISS with an ISS-Lyapunov function \({V_i}:{{\bf{R}}^{{n_i}}} \rightarrow {{\bf{R}}_ +}\), which is locally Lipschitz on \({{\bf{R}}^{{n_i}}}\backslash \{0\}\) and satisfies
    $${\underline \alpha _i}(\left\vert {{x_i}} \right\vert) \leq {V_i}({x_i}) \leq {\bar \alpha _i}(\left\vert {{x_i}} \right\vert)$$
    $$\matrix{{{V_i}({x_i}) \geq \mathop {\max}\limits_{j \neq i} \{\chi _i^j({V_j}({x_j})),{\gamma _i}(\left\vert {{e_i}} \right\vert)\} \Rightarrow} \hfill \cr{\nabla {V_i}({x_i}){f_i}(x,{e_i}) \leq - {\alpha _i}({V_i}({x_i}))\quad {\rm{a}}{\rm{.e}}.} \hfill \cr}$$
    where \({\underline \alpha _i},{{\bar \alpha}_i} \in {{\cal K}_\infty}\), \(\chi _i^j,{\gamma _i} \in {\cal K} \cup \{0\}\), and α_i is continuous and positive definite.

With Assumption 6 satisfied, the large-scale system (43) is ISS with e i for i = 1, ⋯, N as the inputs, if the interconnection gains \(\chi _i^j\) satisfy the cyclic-small-gain condition. If, additionally, the event triggers are designed such that Objective 1 is achieved and each e i (t) asymptotically converges to the origin, then x(t) globally asymptotically converges to the origin. In this section, we propose a new class of decentralized event triggers for the large-scale nonlinear system by using ISS small-gain arguments.
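As a toy illustration of the cyclic-small-gain condition, consider hypothetical linear interconnection gains on a cycle of three subsystems (in general the condition involves compositions of nonlinear gains along every simple cycle):

```python
# Illustrative check (assumed gains, not from the paper): for linear gains
# chi_i^j(s) = c_ij * s, composing gains along a cycle multiplies the
# slopes, and the cyclic-small-gain condition requires the composition
# along every simple cycle to be strictly below the identity.
chi = {(1, 2): 0.5, (2, 3): 0.7, (3, 1): 0.9}   # assumed gain slopes c_ij

def cycle_contracts(cycle):
    g = 1.0
    for edge in cycle:
        g *= chi[edge]          # slope of chi_1^2 o chi_2^3 o chi_3^1
    return g < 1.0              # contraction: composition < Id

assert cycle_contracts([(1, 2), (2, 3), (3, 1)])   # 0.315 < 1
```

If any cycle's composed gain were not a contraction, disturbances could be amplified around that loop and ISS of the network would not follow.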

In this paper, each threshold signal μ_i is generated by a dynamic system of the form
$${\dot \eta _i}(t) = - {\phi _i}({\eta _i}(t))$$
$${\mu _i}(t) = {\varphi _i}({\eta _i}(t))$$
where \({\eta _i} \in {{\bf{R}}_ +}\) is the state, \({\varphi _i}:{{\bf{R}}_ +} \rightarrow {{\bf{R}}_ +}\) is locally Lipschitz and positive definite, and \({\phi _i}:{{\bf{R}}_ +} \rightarrow {{\bf{R}}_ +}\) is continuously differentiable on (0, ∞) and of class \({{\cal K}_\infty}\). The initial state η_i(0) is chosen to be positive, so μ_i(0) is positive.
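A minimal sketch of the threshold generator (50) and (51), with the assumed choices ϕ_i(s) = s and φ_i(s) = 0.5s (hypothetical, but satisfying the stated regularity conditions):

```python
# eta_i' = -phi_i(eta_i) with phi_i = Id gives exponential decay of eta_i,
# and mu_i = varphi_i(eta_i) = 0.5*eta_i is then a strictly positive,
# monotonically shrinking threshold signal.
dt = 1e-3
eta = 1.0                       # eta_i(0) > 0 ensures mu_i(0) > 0
mu = []
for _ in range(int(5.0 / dt)):
    eta += dt * (-eta)          # forward-Euler step of (50)
    mu.append(0.5 * eta)        # output map (51)

assert all(m > 0 for m in mu)   # threshold stays positive at all times
assert mu[-1] < mu[0]           # and decays over time
```

Positivity of μ_i is what gives each subsystem a nonzero dwell time between events (Objective 1), while its decay forces the sampling errors, and hence the states, toward the origin (Objective 2).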

The design of the event triggers depends on the dynamic behavior of the closed-loop event-triggered system. Based on the estimation of the convergence rate of the closed-loop event-triggered system, the functions φ i and ϕ i can be found for the decentralized event triggers to achieve Objectives 1 and 2.

Lemma 1 provides an estimate on the convergence rate of the closed-loop event-triggered system.
  • Lemma 1. Under Assumption 6, suppose that the large-scale system composed of (43) satisfies the cyclic-small-gain condition.
    1. 1)
      For each i = 1, ⋯, N, there exists \({\sigma _i} \in {{\cal K}_\infty}\) being locally Lipschitz on (0, ∞) such that
      $${\bar V_i}({x_i}) = {\sigma _i}({V_i}({x_i}))$$
      is an ISS-Lyapunov function of the x i -subsystem, that satisfies
      $$\matrix{{{{\bar V}_i}({x_i}) \geq \mathop {\max}\limits_{j \neq i} \left\{{\bar \chi _i^j({{\bar V}_j}({x_j})),{{\bar \gamma}_i}(\left\vert {{e_i}} \right\vert)} \right\} \Rightarrow} \hfill \cr{\nabla {{\bar V}_i}({x_i}){f_i}(x,{e_i}) \leq - \alpha _i^{\prime}({{\bar V}_i}({x_i}))\quad {\rm{a}}{\rm{.e}}.} \hfill \cr}$$
      where \(\bar \chi _i^j \in {\cal K} \cup \{0\}\) satisfies \(\bar \chi _i^j < {\rm{Id}}\), \({{\bar \gamma}_i} = {\sigma _i} \circ {\gamma _i}\), and \(\alpha _i^{\prime}\) is continuous and positive definite.
    2. 2)
      Consider the large-scale system composed of (43), (50) and (51). Suppose that (47) holds for t ∈ [0, Tmax) for i = 1, ⋯, N. By choosing φ_i such that \({{\bar \gamma}_i} \circ {\varphi _i} < {\rm{Id}}\) for i = 1, ⋯, N, the function
      $$V(x,\eta) = \mathop {\max}\limits_{i = 1, \cdots ,N} \{{\bar V_i}({x_i}),{\eta _i}\}$$
      satisfies
      $${D^ +}V(x(t),\eta (t)) \leq - \alpha (V(x(t),\eta (t)))$$
      for all t ∈ [0, Tmax), where
      $$\alpha (s) = \mathop {\min}\limits_{i = 1, \cdots ,N} \{\alpha _i^{\prime}(s),{\phi _i}(s)\}$$
      for sR+.

      Based on the estimation of the convergence rate of the closed-loop event-triggered system, we summarize our main result of decentralized event trigger design in Theorem 7.

  • Theorem 7. Consider the interconnected system composed of (43), (50) and (51) subject to (45), with Assumption 6 satisfied. The two objectives of decentralized event-triggered control are achievable if there exists a \(\bar \gamma \in {{\cal K}_\infty}\) such that \(\bar \gamma \geq {\max _{i = 1, \cdots ,N}}\{{{\bar \gamma}_i}\}\) and \(\underline \alpha _i^{- 1} \circ \sigma _i^{- 1} \circ \bar \gamma\) is locally Lipschitz for i =1, ⋯, N.

Please see [83] for the original proofs of Lemma 1 and Theorem 7.
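The composite function (52) and the decay rate (54) can be illustrated numerically for two hypothetical subsystems with assumed linear rates:

```python
# Sketch of (52)-(54): V is the max over the scaled subsystem Lyapunov
# functions barV_i and the threshold states eta_i, and the guaranteed
# decay rate alpha is the slowest among all alpha'_i and phi_i.  Here we
# assume barV_i(x_i) = x_i**2, alpha'_i(s) = a_i*s, phi_i(s) = b_i*s.
a = [1.0, 0.5]     # hypothetical alpha'_i slopes
b = [0.8, 0.6]     # hypothetical phi_i slopes

def V(x, eta):
    return max(max(xi ** 2 for xi in x), max(eta))

def alpha(s):
    return min(min(ai * s for ai in a), min(bi * s for bi in b))

assert V([0.5, 1.5], [0.3, 2.0]) == 2.25   # max{0.25, 2.25, 0.3, 2.0}
assert alpha(2.0) == 1.0                   # slowest slope 0.5 dominates
```

Taking the minimum over all subsystem and threshold rates is conservative but guarantees a single decay estimate for the whole network, which is exactly what Theorem 7 exploits.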

5 Conclusions and future work

This paper has presented a review of recent results in the field of constructive nonlinear control design, with a focus on the nonlinear small-gain tools in solving the problems of quantized feedback stabilization and event-triggered control for nonlinear systems.

In view of the convergence of control, computing and communications, quite a few fundamental research problems remain open and await new solutions and tools. The subjects listed below are closely related to the preliminary results presented in this review article.
  1. 1)

    Event-triggered control of nonlinear systems with quantized and/or delayed measurements. In networked control systems, data sampling and quantization usually co-exist. In the quantized control results, we use ISS gains to represent the influence of the quantization error, while for event-based control, we employ an ISS gain to represent the influence of data sampling. This creates an opportunity to develop a unified framework for event-triggered and quantized control of nonlinear systems. Time-delays also arise in networked control systems. Note that [78] has studied event-triggered control for linear systems with quantization and delays; see also our recent preliminary work[84, 85]. Based on the recent theoretical achievements for nonlinear systems with time-delays[86], it is of interest to study the event-triggered control problem for nonlinear systems by taking into account the effects of time-delays.

  2. 2)

    Distributed event-triggered control. The idea of small-gain design also bridges event-triggered control and our recent distributed control results. In [87], it is shown that a distributed control problem for nonlinear uncertain systems can ultimately be transformed into a robust stability problem of a network of ISS subsystems. By integrating the idea in this paper, distributed control could be realized through event-triggered information exchange. Note that such ideas have been implemented for linear systems[76, 88, 89, 90].

  3. 3)

    Data-driven nonlinear control. An emerging topic under current investigation is to develop a new class of data-driven controllers for robust optimal control of nonlinear uncertain systems, leveraging techniques from reinforcement learning and adaptive dynamic programming. Some prior results are presented in [91, 92, 93, 94] and references therein.



References

  [1] A. H. Levis, S. I. Marcus, W. R. Perkins, P. Kokotovic, M. Athans, R. W. Brockett, A. S. Willsky. Challenges to control: A collective view. IEEE Transactions on Automatic Control, vol. 32, no. 4, pp. 275–285, 1987.
  [2] L. Guo, D. Z. Cheng, D. X. Feng. Introduction to Control Theory, Beijing, China: Science Press, 2005. (in Chinese)
  [3] Y. G. Hong, X. L. Wang, Z. P. Jiang. Distributed output regulation of leader-follower multi-agent systems. International Journal of Robust and Nonlinear Control, vol. 23, no. 1, pp. 48–66, 2013.
  [4] A. Isidori. Nonlinear Control Systems, 3rd ed., London, UK: Springer, 1995.
  [5] A. Isidori. Nonlinear Control Systems, 3rd ed., London, UK: Springer, 1999.
  [6] H. K. Khalil. Nonlinear Systems, 3rd ed., NJ, USA: Prentice-Hall, 2002.
  [7] E. D. Sontag. Input to state stability: Basic concepts and results. Lectures given at the C.I.M.E. Summer School, Nonlinear and Optimal Control Theory, Springer-Verlag, Cetraro, Italy, vol. 1932, pp. 163–220, 2008.
  [8] E. D. Sontag, A. Teel. Changing supply functions in input/state stable systems. IEEE Transactions on Automatic Control, vol. 40, no. 8, pp. 1476–1478, 1995.
  [9] Z. T. Ding. Global stabilization and disturbance suppression of a class of nonlinear systems with uncertain internal model. Automatica, vol. 39, no. 3, pp. 471–479, 2003.
  [10] B. A. Francis, W. M. Wonham. The internal model principle of control theory. Automatica, vol. 12, no. 5, pp. 457–465, 1976.
  [11] J. Huang. Nonlinear Output Regulation: Theory and Applications, Advances in Design and Control, Philadelphia, USA: SIAM, 2004.
  [12] A. Isidori, L. Marconi, A. Serrani. Robust Autonomous Guidance: An Internal Model Approach, Berlin, Germany: Springer, 2003.
  [13] T. J. Tarn, P. Sanposh, D. Z. Cheng, M. J. Zhang. Output regulation for nonlinear systems: Some recent theoretical and experimental results. IEEE Transactions on Control Systems Technology, vol. 13, no. 4, pp. 605–610, 2005.
  [14] X. L. Wang, Y. G. Hong, J. Huang, Z. P. Jiang. A distributed control approach to a robust output regulation problem for multi-agent linear systems. IEEE Transactions on Automatic Control, vol. 55, no. 12, pp. 2891–2895, 2010.
  [15] X. D. Ye, J. Huang. Decentralized adaptive output regulation for a class of large-scale nonlinear systems. IEEE Transactions on Automatic Control, vol. 48, no. 2, pp. 276–281, 2003.
  [16] L. Marconi, L. Praly, A. Isidori. Output stabilization via nonlinear Luenberger observers. SIAM Journal on Control and Optimization, vol. 45, no. 6, pp. 2277–2298, 2007.
  [17] L. Praly, Z. P. Jiang. Stabilization by output feedback for systems with ISS inverse dynamics. Systems and Control Letters, vol. 21, no. 1, pp. 19–33, 1993.
  [18] A. Pavlov, N. van de Wouw, H. Nijmeijer. Uniform Output Regulation of Nonlinear Systems: A Convergent Dynamics Approach, Boston, USA: Birkhauser, 2005.
  [19] E. D. Sontag. Mathematical Control Theory: Deterministic Finite Dimensional Systems, 2nd ed., New York, USA: Springer-Verlag, 1998.
  [20] Z. Artstein. Stabilization with relaxed controls. Nonlinear Analysis: Theory, Methods & Applications, vol. 7, no. 11, pp. 1163–1173, 1983.
  [21] M. Krstić, I. Kanellakopoulos, P. V. Kokotović. Nonlinear and Adaptive Control Design, New York, USA: Wiley, 1995.
  [22] L. Praly, G. Bastin, J. B. Pomet, Z. P. Jiang. Adaptive stabilization of nonlinear systems. Foundations of Adaptive Control, Lecture Notes in Control and Information Sciences, P. V. Kokotović, Ed., Berlin, Germany: Springer-Verlag, pp. 347–434, 1991.
  [23] Z. P. Jiang, L. Praly. Preliminary results about robust Lagrange stability in adaptive nonlinear regulation. International Journal of Adaptive Control and Signal Processing, vol. 6, no. 4, pp. 285–307, 1992.
  [24] R. Freeman, P. V. Kokotović. Robust Nonlinear Control Design, Boston, USA: Birkhauser, 1996.
  [25] J. A. Primbs, V. Nevistić, J. C. Doyle. Nonlinear optimal control: A control Lyapunov function and receding horizon perspective. Asian Journal of Control, vol. 1, no. 1, pp. 14–24, 1999.
  [26] M. Jankovic. Control Lyapunov-Razumikhin functions and robust stabilization of time delay systems. IEEE Transactions on Automatic Control, vol. 46, no. 7, pp. 1048–1060, 2001.
  [27] I. Karafyllis, Z. P. Jiang. Necessary and sufficient Lyapunov-like conditions for robust nonlinear stabilization. ESAIM: Control, Optimization and Calculus of Variations, vol. 16, no. 4, pp. 887–928, 2010.
  [28] P. Ögren, M. Egerstedt, X. M. Hu. A control Lyapunov function approach to multiagent coordination. IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 847–851, 2002.
  [29] P. V. Kokotović, M. Arcak. Constructive nonlinear control: A historical perspective. Automatica, vol. 37, no. 5, pp. 637–662, 2001.
  [30] J. Tsinias. Sufficient Lyapunov-like conditions for stabilization. Mathematics of Control, Signals and Systems, vol. 2, no. 4, pp. 343–357, 1989.
  [31] P. V. Kokotović. The joy of feedback: Nonlinear and adaptive. IEEE Control Systems, vol. 12, no. 3, pp. 7–17, 1992.
  [32] J. M. Coron, L. Praly. Adding an integrator for the stabilization problem. Systems & Control Letters, vol. 17, no. 2, pp. 89–104, 1991.
  [33] A. Iggidr, G. Sallet. Nonlinear stabilization by adding an integrator. Kybernetika, vol. 30, no. 5, pp. 499–506, 1994.
  [34] R. Outbib, H. Jghima. Comments on the stabilization of nonlinear systems by adding an integrator. IEEE Transactions on Automatic Control, vol. 41, no. 12, pp. 1804–1807, 1996.
  [35] R. Marino, P. Tomei. Nonlinear Control Design: Geometric, Adaptive and Robust, London, UK: Prentice-Hall, 1995.
  [36] Z. P. Jiang, L. Praly. Design of robust adaptive controllers for nonlinear systems with dynamic uncertainties. Automatica, vol. 34, no. 7, pp. 825–840, 1998.
  [37] R. Sepulchre, M. Janković, P. V. Kokotović. Constructive Nonlinear Control, Berlin, Germany: Springer, 1997.
  [38] R. Ortega, A. van der Schaft, F. Castanos, A. Astolfi. Control by interconnection and standard passivity-based control of port-Hamiltonian systems. IEEE Transactions on Automatic Control, vol. 53, no. 11, pp. 2527–2542, 2008.
  [39] R. Ortega, L. P. Borja. New results on control by interconnection and energy-balancing passivity-based control of port-Hamiltonian systems. In Proceedings of the 53rd IEEE Conference on Decision and Control, IEEE, Los Angeles, USA, pp. 2346–2351, 2014.
  [40] H. G. Tanner, K. J. Kyriakopoulos. Backstepping for nonsmooth systems. Automatica, vol. 39, no. 7, pp. 1259–1265, 2003.
  [41] J. Zhou, C. Y. Wen. Adaptive Backstepping Control of Uncertain Systems: Nonsmooth Nonlinearities, Interactions or Time-variations, London, UK: Springer, 2008.
  [42] Y. G. Hong, Z. P. Jiang, G. Feng. Finite-time input-to-state stability and applications to finite-time control design. SIAM Journal on Control and Optimization, vol. 48, no. 7, pp. 4395–4418, 2010.
  [43] Z. P. Jiang, A. R. Teel, L. Praly. Small-gain theorem for ISS systems and applications. Mathematics of Control, Signals and Systems, vol. 7, no. 2, pp. 95–120, 1994.
  [44] Z. P. Jiang, I. Mareels. A small gain control method for nonlinear cascaded systems with dynamic uncertainties. IEEE Transactions on Automatic Control, vol. 42, no. 3, pp. 292–308, 1997.
  [45] E. D. Sontag. Smooth stabilization implies coprime factorization. IEEE Transactions on Automatic Control, vol. 34, no. 4, pp. 435–443, 1989.
  [46] I. Karafyllis, Z. P. Jiang. Stability and Stabilization of Nonlinear Systems, London, UK: Springer, 2011.
  [47] Z. P. Jiang, I. Mareels, D. J. Hill, J. Huang. A unifying framework for global regulation via nonlinear output feedback: From ISS to iISS. IEEE Transactions on Automatic Control, vol. 49, no. 4, pp. 549–562, 2004.
  [48] T. F. Liu, Z. P. Jiang, D. J. Hill. A sector bound approach to feedback control of nonlinear systems with state quantization. Automatica, vol. 48, no. 1, pp. 145–152, 2012.
  [49] D. Liberzon. Hybrid feedback stabilization of systems with quantized signals. Automatica, vol. 39, no. 9, pp. 1543–1554, 2003.
  [50] T. Liu, Z. P. Jiang, D. J. Hill. Small-gain based output-feedback controller design for a class of nonlinear systems with actuator dynamic quantization. IEEE Transactions on Automatic Control, vol. 57, no. 5, pp. 1326–1332, 2012.
  [51] T. F. Liu, Z. P. Jiang, D. J. Hill. Quantized stabilization of strict-feedback nonlinear systems based on ISS cyclic-small-gain theorem. Mathematics of Control, Signals, and Systems, vol. 24, no. 1–2, pp. 75–110, 2012.
  [52] T. F. Liu, Z. P. Jiang, D. J. Hill. Nonlinear Control of Dynamic Networks, Boca Raton, FL, USA: CRC Press, 2014.
  [53] R. K. Miller, M. S. Mousa, A. N. Michel. Quantization and overflow effects in digital implementation of linear dynamic controllers. IEEE Transactions on Automatic Control, vol. 33, no. 7, pp. 698–704, 1988.
  [54] D. F. Delchamps. Stabilizing a linear system with quantized state feedback. IEEE Transactions on Automatic Control, vol. 35, no. 8, pp. 916–924, 1990.
  [55] R. W. Brockett, D. Liberzon. Quantized feedback stabilization of linear systems. IEEE Transactions on Automatic Control, vol. 45, no. 7, pp. 1279–1289, 2000.
  [56] M. Fu, L. Xie. The sector bound approach to quantized feedback control. IEEE Transactions on Automatic Control, vol. 50, no. 11, pp. 1698–1711, 2005.
  [57] N. Elia, S. K. Mitter. Stabilization of linear systems with limited information. IEEE Transactions on Automatic Control, vol. 46, no. 9, pp. 1384–1400, 2001.
  [58] F. Ceragioli, C. De Persis. Discontinuous stabilization of nonlinear systems: Quantized and switching controls. Systems & Control Letters, vol. 56, no. 7–8, pp. 461–473, 2007.
  [59] C. De Persis. Robust stabilization of nonlinear systems by quantized and ternary control. Systems & Control Letters, vol. 58, no. 8, pp. 602–608, 2009.
  [60] D. Liberzon, J. P. Hespanha. Stabilization of nonlinear systems with limited information feedback. IEEE Transactions on Automatic Control, vol. 50, no. 6, pp. 910–915, 2005.
  [61] D. Liberzon, D. Nesić. Input-to-state stabilization of linear systems with quantized state measurements. IEEE Transactions on Automatic Control, vol. 52, no. 5, pp. 767–781, 2007.
  [62] D. Liberzon. Observer-based quantized output feedback control of nonlinear systems. In Proceedings of the 17th IFAC World Congress, COEX, South Korea, vol. 17, pp. 8039–8043, 2008.
  [63] T. F. Liu, Z. P. Jiang. Quantized feedback stabilization of nonlinear cascaded systems with dynamic uncertainties. Science China: Information Sciences, to be published.
  [64] K. E. Årzén. A simple event-based PID controller. In Proceedings of the 1999 IFAC World Congress, pp. 423–428, 1999.
  [65] K. J. Åström, B. M. Bernhardsson. Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In Proceedings of the 41st IEEE Conference on Decision and Control, IEEE, Las Vegas, USA, vol. 2, pp. 2011–2016, 2002.
  [66] J. K. Yook, D. M. Tilbury, N. R. Soparkar. Trading computation for bandwidth: Reducing communication in distributed control systems using state estimators. IEEE Transactions on Control Systems Technology, vol. 10, no. 4, pp. 503–518, 2002.
  [67] P. Tabuada. Event-triggered real-time scheduling of stabilizing control tasks. IEEE Transactions on Automatic Control, vol. 52, no. 9, pp. 1680–1685, 2007.
  [68] W. P. M. H. Heemels, J. H. Sandee, P. P. J. van den Bosch. Analysis of event-driven controllers for linear systems. International Journal of Control, vol. 81, no. 4, pp. 571–590, 2008.
  [69] T. Henningsson, E. Johannesson, A. Cervin. Sporadic event-based control of first-order linear stochastic systems. Automatica, vol. 44, no. 11, pp. 2890–2895, 2008.
  [70] W. P. M. H. Heemels, M. C. F. Donkers, A. R. Teel. Periodic event-triggered control for linear systems. IEEE Transactions on Automatic Control, vol. 58, no. 4, pp. 847–861, 2013.
  [71] T. Donkers, M. Heemels. Output-based event-triggered control with guaranteed L∞-gain and improved and decentralized event-triggering. IEEE Transactions on Automatic Control, vol. 57, no. 6, pp. 1362–1376, 2012.
  [72] P. J. Gawthrop, L. P. Wang. Event-driven intermittent control. International Journal of Control, vol. 82, no. 12, pp. 2235–2248, 2009.
  [73] J. Lunze, D. Lehmann. A state-feedback approach to event-based control. Automatica, vol. 46, no. 1, pp. 211–215, 2010.
  [74] N. Marchand, S. Durand, J. F. G. Castellanos. A general formula for event-based stabilization of nonlinear systems. IEEE Transactions on Automatic Control, vol. 58, no. 5, pp. 1332–1337, 2013.
  [75] P. Tallapragada, N. Chopra. On event triggered tracking for nonlinear systems. IEEE Transactions on Automatic Control, vol. 58, no. 9, pp. 2343–2348, 2013.
  [76] X. F. Wang, M. D. Lemmon. Event-triggering in distributed networked control systems. IEEE Transactions on Automatic Control, vol. 56, no. 3, pp. 586–601, 2011.
  [77] C. De Persis, R. Sailer, F. Wirth. Parsimonious event-triggered distributed control: A Zeno free approach. Automatica, vol. 49, no. 7, pp. 2116–2124, 2013.
  [78] E. Garcia, P. J. Antsaklis. Model-based event-triggered control for systems with quantization and time-varying network delays. IEEE Transactions on Automatic Control, vol. 58, no. 2, pp. 422–434, 2013.
  [79] W. P. M. H. Heemels, K. H. Johansson, P. Tabuada. An introduction to event-triggered and self-triggered control. In Proceedings of the 51st Annual IEEE Conference on Decision and Control, IEEE, Maui, USA, pp. 3270–3285, 2012.
  [80] M. D. Lemmon. Event-triggered feedback in control, estimation, and optimization. Networked Control Systems, A. Bemporad, M. Heemels, M. Johansson, Eds., Berlin, Germany: Springer-Verlag, vol. 406, pp. 293–358, 2010.
  [81] R. Goebel, R. G. Sanfelice, A. R. Teel. Hybrid dynamical systems. IEEE Control Systems, vol. 29, no. 2, pp. 28–93, 2009.
  [82] T. F. Liu, Z. P. Jiang. A small-gain approach to robust event-triggered control of nonlinear systems. IEEE Transactions on Automatic Control, to be published.
  [83] T. F. Liu, Z. P. Jiang. Event-based nonlinear control: From centralized to decentralized systems. In Proceedings of 2015 IEEE International Conference on Information and Automation, Lijiang, China, pp. 690–695, 2015.
  [84] W. Zhu, Z. P. Jiang, G. Feng. Event-based consensus of multi-agent systems with general linear models. Automatica, vol. 50, no. 2, pp. 552–558, 2014.
  [85] W. Zhu, Z. P. Jiang. Event-based leader-following consensus of multi-agent systems with input time delay. IEEE Transactions on Automatic Control, vol. 60, no. 5, pp. 1362–1367, 2015.
  [86] I. Karafyllis, Z. P. Jiang. Stability and Stabilization of Nonlinear Systems, London, UK: Springer, 2011.
  [87] T. F. Liu, Z. P. Jiang. Distributed formation control of nonholonomic mobile robots without global position measurements. Automatica, vol. 49, no. 2, pp. 592–600, 2013.
  [88] Y. Fan, G. Feng, Y. Wang, C. Song. Distributed event-triggered control of multi-agent systems with combinational measurements. Automatica, vol. 49, no. 2, pp. 671–675, 2013.
  [89] G. S. Seyboth, D. V. Dimarogonas, K. H. Johansson. Event-based broadcasting for multi-agent average consensus. Automatica, vol. 49, no. 1, pp. 245–252, 2013.
  [90] M. Guinaldo, D. V. Dimarogonas, K. H. Johansson, J. Sánchez, S. Dormido. Distributed event-based control strategies for interconnected linear systems. IET Control Theory and Applications, vol. 7, no. 6, pp. 877–886, 2013.
  [91] F. L. Lewis, D. Vrabie, K. G. Vamvoudakis. Reinforcement learning and feedback control: Using natural decision methods to design optimal adaptive controllers. IEEE Control Systems, vol. 32, no. 6, pp. 76–105, 2012.
  [92] Z. P. Jiang, Y. Jiang. Robust adaptive dynamic programming for linear and nonlinear systems: An overview. European Journal of Control, vol. 19, no. 5, pp. 417–425, 2013.
  [93] Y. Jiang, Z. P. Jiang. Robust adaptive dynamic programming and feedback stabilization of nonlinear systems. IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 5, pp. 882–893, 2014.
  [94] T. Bian, Y. Jiang, Z. P. Jiang. Decentralized adaptive optimal control of large-scale systems with application to power systems. IEEE Transactions on Industrial Electronics, vol. 62, no. 4, pp. 2439–2447, 2015.

Copyright information

© Institute of Automation, Chinese Academy of Sciences and Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. Polytechnic School of Engineering, New York University, Brooklyn, USA
  2. State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang, China
