
Nonsmooth Analysis in Systems and Control Theory


Article Outline

Glossary

Definition of the Subject

Introduction

Elements of Nonsmooth Analysis

Necessary Conditions in Optimal Control

Verification Functions

Dynamic Programming and Viscosity Solutions

Lyapunov Functions

Stabilizing Feedback

Future Directions

Bibliography


Glossary

Generalized gradients and subgradients:

These terms refer to various set-valued replacements for the usual derivative, used to develop a differential calculus for functions that are not differentiable in the classical sense. The subject itself is known as nonsmooth analysis. One of the best-known theories of this type is that of generalized gradients. Another basic construct is the subgradient, of which there are several variants. The approach also features generalized tangent and normal vectors, which apply to sets that are not classical manifolds. The article contains a summary of the essential definitions.
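
For concreteness, here is a sketch of the best-known construction, Clarke's generalized gradient of a locally Lipschitz function f on R^n (the notation f, x, v here is generic, not taken from the article). One first forms the generalized directional derivative

\[
f^{\circ}(x;v) \;=\; \limsup_{y \to x,\ t \downarrow 0} \frac{f(y+tv) - f(y)}{t},
\]

and then defines the generalized gradient as the set

\[
\partial f(x) \;=\; \bigl\{\, \zeta \in \mathbb{R}^n : f^{\circ}(x;v) \ge \langle \zeta, v \rangle \ \text{for all } v \in \mathbb{R}^n \,\bigr\}.
\]

When f is continuously differentiable, \(\partial f(x)\) reduces to the singleton \(\{\nabla f(x)\}\); for \(f(x) = |x|\) on \(\mathbb{R}\), one gets \(\partial f(0) = [-1,1]\).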

Pontryagin Maximum Principle:

The main theorem on necessary conditions in optimal control was developed in the 1950s by the Russian mathematician L. Pontryagin and his associates. The Maximum Principle unifies and extends to the control setting the classical necessary conditions of Euler and Weierstrass from the calculus of variations, as well as the transversality conditions. There have been numerous extensions since then, as the need to consider new types of problems continues to arise.
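
As a schematic illustration only (the full statement, with its hypotheses and transversality conditions, is in the article): for dynamics \(\dot x = f(x,u)\) with control values \(u(t) \in U\) and, say, a terminal cost to be minimized (generic notation), introduce the Hamiltonian \(H(x,p,u) = \langle p, f(x,u) \rangle\). The principle asserts that an optimal pair \((x^*, u^*)\) admits an adjoint arc p satisfying, almost everywhere,

\[
-\dot p(t) = \frac{\partial H}{\partial x}\bigl(x^*(t), p(t), u^*(t)\bigr),
\qquad
H\bigl(x^*(t), p(t), u^*(t)\bigr) = \max_{u \in U} H\bigl(x^*(t), p(t), u\bigr),
\]

together with transversality conditions at the endpoints reflecting the cost and the endpoint constraints.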

Verification functions:

In attempting to prove that a given control is indeed the solution to an optimal control problem, one important approach hinges on exhibiting a function whose properties imply the optimality of that control. Such a function is termed a verification function. The approach becomes widely applicable once one allows nonsmooth verification functions.
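
A minimal smooth-case sketch of how this works (the notation f, ℓ, g, U is generic): to minimize \(\int_{t_0}^{T} \ell(x,u)\,dt + g(x(T))\) over trajectories of \(\dot x = f(x,u)\), \(u(t) \in U\), suppose a \(C^1\) function \(\varphi\) satisfies

\[
\varphi_t(t,x) + \min_{u \in U}\bigl\{ \langle \nabla_x \varphi(t,x), f(x,u) \rangle + \ell(x,u) \bigr\} \ge 0,
\qquad
\varphi(T,x) \le g(x).
\]

Integrating the first inequality along any admissible trajectory shows that its cost is at least \(\varphi(t_0, x_0)\); a control whose trajectory attains this lower bound is therefore optimal. Requiring \(\varphi\) to be smooth is restrictive, which is precisely why nonsmooth verification functions matter.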

Dynamic programming:

A well-known technique in dynamic problems of optimization is to solve (in a discrete context) a backwards recursion for a certain value function related to the problem. This technique, developed notably by Bellman, applies in particular to optimal control problems. In the continuous setting, the recursion corresponds to the Hamilton–Jacobi equation. This partial differential equation does not generally admit smooth classical solutions. The theory of viscosity solutions uses subgradients to define generalized solutions, and establishes their existence and uniqueness.
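
Concretely, for a finite-horizon problem with running cost ℓ and terminal cost g (generic notation, as above), with discrete dynamics \(x_{k+1} = f(x_k, u_k)\), the backwards recursion takes the form

\[
V_N(x) = g(x), \qquad
V_k(x) = \min_{u \in U}\bigl\{ \ell(x,u) + V_{k+1}(f(x,u)) \bigr\},
\]

and its continuous-time counterpart is the Hamilton–Jacobi–Bellman equation

\[
V_t(t,x) + \min_{u \in U}\bigl\{ \langle \nabla_x V(t,x), f(x,u) \rangle + \ell(x,u) \bigr\} = 0,
\qquad V(T,x) = g(x),
\]

solved backwards from the terminal condition. The value function V is typically merely continuous, so the equation must be interpreted in the viscosity sense.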

Lyapunov function:

In the classical theory of ordinary differential equations, global asymptotic stability is most often verified by exhibiting a Lyapunov function: a function that decreases along trajectories. In that setting, the existence of a smooth Lyapunov function is both necessary and sufficient for stability. The Lyapunov function concept extends to control systems, but in that case it turns out that nonsmooth functions are essential. These generalized control Lyapunov functions play an important role in designing optimal or stabilizing feedback.
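
One common nonsmooth formulation, stated here in proximal terms (the notation f, U, W is generic), requires a positive definite function V for the system \(\dot x = f(x,u)\) to satisfy the infinitesimal decrease condition

\[
\min_{u \in U} \langle \zeta, f(x,u) \rangle \le -W(x)
\qquad \text{for all } \zeta \in \partial_P V(x),
\]

where \(\partial_P V(x)\) is the proximal subdifferential and W is positive definite. When V is smooth, this reduces to the classical requirement \(\min_{u \in U} \langle \nabla V(x), f(x,u) \rangle \le -W(x)\).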



Copyright information

© 2012 Springer-Verlag

About this entry

Cite this entry

Clarke, F. (2012). Nonsmooth Analysis in Systems and Control Theory. In: Meyers, R. (eds) Mathematics of Complexity and Dynamical Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-1806-1_69
