
Adaptive actor-critic learning for the control of mobile robots by applying predictive models

  • Original Paper

Abstract

In this paper, we propose two adaptive actor-critic architectures for solving control problems of nonlinear systems. The first method uses two actual states, at time k and time k+1, to update the learning algorithm; the underlying idea is that the agent can take knowledge directly from the environment to improve its own knowledge. The second method uses only the state at time k to update the algorithm and is therefore called learning from prediction (or from simulated experience). Both methods include one or two predictive models, which are used to construct predictive states and a model-based actor (MBA). The MBA, acting as the actor, can be viewed as a network whose connection weights are the elements of a feedback gain matrix. In the critic part, two value functions are realized as pure static mappings, which can be reduced to nonlinear current estimators by using radial basis function neural networks (RBFNNs). Simulation results obtained for a dynamical model of a nonholonomic mobile robot with two independent driving wheels are presented; they show the effectiveness of the proposed approaches for the trajectory tracking control problem.
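The abstract describes the architecture only at a high level. The Python sketch below is a minimal, hypothetical illustration of the two update schemes it contrasts (learning from actual experience at times k and k+1 versus learning from a predicted state), with an MBA whose adjustable weights are feedback gains and an RBFNN critic. All class and function names, update rules, learning rates, and the toy predictive model are assumptions introduced here for concreteness; they are not the paper's equations.

```python
import numpy as np

class RBFCritic:
    """Critic: value function realized as a static RBF network mapping (a sketch)."""
    def __init__(self, centers, width, lr=0.05):
        self.centers = np.asarray(centers)    # (n_basis, state_dim) Gaussian centers
        self.width = width                    # common Gaussian width
        self.w = np.zeros(len(self.centers))  # linear output weights
        self.lr = lr

    def phi(self, x):
        d = np.linalg.norm(self.centers - x, axis=1)
        return np.exp(-(d / self.width) ** 2)

    def value(self, x):
        return self.phi(x) @ self.w

    def update(self, x, td_error):
        # gradient step that reduces the squared TD error w.r.t. the output weights
        self.w += self.lr * td_error * self.phi(x)


class ModelBasedActor:
    """Actor whose 'connection weights' are the entries of a feedback gain matrix K."""
    def __init__(self, state_dim, action_dim, lr=0.01):
        self.K = np.zeros((action_dim, state_dim))
        self.lr = lr

    def act(self, x):
        return -self.K @ x  # linear state feedback u = -K x

    def update(self, x, td_error):
        # crude gain adaptation driven by the TD error (an assumption; the paper
        # derives its own tuning rule for the gain matrix)
        self.K += self.lr * td_error * np.tile(x, (self.K.shape[0], 1))


def predictive_model(x, u, dt=0.05):
    # toy stand-in for the robot's predictive model x_{k+1} = f(x_k, u_k);
    # assumes the first half of the state integrates the second half
    return x + dt * np.concatenate([x[len(x) // 2:], u])


# Method 1 ("actual experience"): both x_k and the measured x_{k+1} are available.
def update_from_actual(actor, critic, x_k, x_k1, reward, gamma=0.95):
    td = reward + gamma * critic.value(x_k1) - critic.value(x_k)
    critic.update(x_k, td)
    actor.update(x_k, td)
    return td


# Method 2 ("learning from prediction"): only x_k is measured; x_{k+1} is predicted.
def update_from_prediction(actor, critic, x_k, reward, gamma=0.95):
    x_pred = predictive_model(x_k, actor.act(x_k))
    td = reward + gamma * critic.value(x_pred) - critic.value(x_k)
    critic.update(x_k, td)
    actor.update(x_k, td)
    return td
```

The only structural difference between the two routines is where the next state comes from: the first takes the measured x_{k+1} from the environment, while the second generates it with the predictive model, which is the distinction the abstract draws between the two proposed methods.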




Author information

Correspondence to Keigo Watanabe.



Cite this article

Syam, R., Watanabe, K. & Izumi, K. Adaptive actor-critic learning for the control of mobile robots by applying predictive models. Soft Comput 9, 835–845 (2005). https://doi.org/10.1007/s00500-004-0424-1
