
Tuning Fuzzy Controller Using Approximated Evaluation Function

  • Conference paper
Soft Computing as Transdisciplinary Science and Technology

Part of the book series: Advances in Soft Computing (AINSC, volume 29)


Abstract

A fuzzy controller requires a control engineer to tune its fuzzy rules for the problem at hand. To reduce this burden, we develop a gradient-based tuning method for fuzzy controllers. The method is closely related to reinforcement learning but exploits a practical assumption to learn faster. In reinforcement learning, the values of problem states must be learned through many trial-and-error interactions between the controller and the plant, and the controller must also learn the plant dynamics. In this work, we instead assume that an approximated value function over the problem states can be expressed as a function of the Euclidean distance from the goal state and of the action executed at that state. Using this function as an evaluation function, the fuzzy controller is tuned toward an optimal policy for reaching the goal state even though the plant dynamics are unknown. Experimental results on a pole-balancing problem show that the proposed method is efficient and effective for both set-point and tracking problems.
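The idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the one-dimensional toy plant, the membership-function layout, the action-cost weight `rho`, and the finite-difference gradient are all our own simplifying assumptions. What the sketch preserves is the core scheme: the consequents of a fuzzy controller are adjusted by gradient descent on an evaluation function built from the distance between the next state and the goal, while the plant dynamics themselves remain a black box to the tuner.

```python
import numpy as np

def gaussian(x, c, sigma):
    """Gaussian membership grade of input x for a fuzzy set centered at c."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

class FuzzyController:
    """Zero-order Takagi-Sugeno controller with tunable singleton consequents."""
    def __init__(self, centers, sigma, consequents):
        self.centers = np.asarray(centers, dtype=float)
        self.sigma = float(sigma)
        self.w = np.asarray(consequents, dtype=float)  # parameters to tune

    def firing(self, x):
        mu = gaussian(x, self.centers, self.sigma)
        return mu / mu.sum()  # normalized rule firing strengths

    def action(self, x):
        return float(self.firing(x) @ self.w)

def evaluation(next_state, action, goal=0.0, rho=0.01):
    """Approximated evaluation: squared distance to the goal plus a small
    action cost (rho is an assumed weighting, not taken from the paper)."""
    return (next_state - goal) ** 2 + rho * action ** 2

def tune_step(ctrl, state, plant, lr=0.2, eps=1e-4):
    """One finite-difference gradient step on the consequents, minimizing the
    evaluation of the state the plant reaches next. The tuner only queries
    the plant; it never sees its equations."""
    a0 = ctrl.action(state)
    base = evaluation(plant(state, a0), a0)
    grad = np.zeros_like(ctrl.w)
    for i in range(len(ctrl.w)):
        ctrl.w[i] += eps
        a = ctrl.action(state)
        grad[i] = (evaluation(plant(state, a), a) - base) / eps
        ctrl.w[i] -= eps
    ctrl.w -= lr * grad

# Hypothetical linear plant standing in for the pole-balancing dynamics:
plant = lambda x, a: x + 0.1 * a

ctrl = FuzzyController(centers=[-1.0, 0.0, 1.0], sigma=0.5,
                       consequents=[0.0, 0.0, 0.0])
for _ in range(300):
    for s in (-1.0, -0.5, 0.5, 1.0):
        tune_step(ctrl, s, plant)
```

After tuning, the controller pushes the state toward the goal (here 0): for a positive state it outputs a negative action, and vice versa, without the tuner ever having been given the plant model.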




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Naba, A., Miyashita, K. (2005). Tuning Fuzzy Controller Using Approximated Evaluation Function. In: Abraham, A., Dote, Y., Furuhashi, T., Köppen, M., Ohuchi, A., Ohsawa, Y. (eds) Soft Computing as Transdisciplinary Science and Technology. Advances in Soft Computing, vol 29. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-32391-0_19


  • DOI: https://doi.org/10.1007/3-540-32391-0_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-25055-5

  • Online ISBN: 978-3-540-32391-4

  • eBook Packages: Engineering (R0)
