
An Analysis of the Pheromone Q-Learning Algorithm

  • Ndedi Monekosso
  • Paolo Remagnino
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2527)

Abstract

The Phe-Q machine learning technique, a modified Q-learning technique, was developed to enable co-operating agents to communicate while learning to solve a problem. Phe-Q combines Q-learning with synthetic pheromone to improve the speed of convergence. The Phe-Q update equation includes a belief factor that reflects the confidence an agent has in the pheromone (the communication) deposited in the environment by other agents. With the Phe-Q update equation, the speed of convergence towards an optimal solution depends on a number of parameters, including the number of agents solving a problem, the amount of pheromone deposited, and the evaporation rate. This paper describes work carried out to optimise the speed of learning with the Phe-Q technique, with respect to pheromone deposition and evaporation rates.
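
As a rough illustration of the kind of update described above, the Python sketch below combines a standard tabular Q-learning backup with a pheromone term weighted by a belief factor, together with deposition and evaporation of the pheromone field. The constants (ALPHA, GAMMA, XI, DEPOSIT, EVAPORATION) and the exact way the belief term enters the greedy target are assumptions made for illustration, not the authors' equation.

```python
from collections import defaultdict

# Illustrative Phe-Q style update (a sketch inferred from the abstract, not the
# authors' exact formulation): a tabular Q-learning backup whose greedy target
# is augmented by a belief-weighted synthetic-pheromone term, while the
# pheromone field itself undergoes deposition and evaporation.

ALPHA = 0.1         # learning rate (assumed value)
GAMMA = 0.9         # discount factor (assumed value)
XI = 0.5            # belief factor: confidence in other agents' pheromone (assumed)
DEPOSIT = 1.0       # pheromone deposited per state visit (assumed)
EVAPORATION = 0.05  # fraction of pheromone evaporating per step (assumed)

Q = defaultdict(float)          # Q[(state, action)]
pheromone = defaultdict(float)  # synthetic pheromone concentration per state


def phe_q_update(state, action, reward, next_state, actions):
    """One Phe-Q style backup: the greedy target mixes successor Q-values
    with a pheromone belief term (hypothetical form of the update)."""
    target = reward + GAMMA * max(
        Q[(next_state, a)] + XI * pheromone[next_state] for a in actions
    )
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])


def deposit_and_evaporate(visited_state, all_states):
    """Each agent reinforces the trail at the state it visits, and the whole
    pheromone field decays at the evaporation rate."""
    pheromone[visited_state] += DEPOSIT
    for s in all_states:
        pheromone[s] *= 1.0 - EVAPORATION
```

In this sketch, larger deposition amounts and lower evaporation rates make the pheromone term dominate the target sooner, which is the trade-off the paper studies when tuning these rates for speed of convergence.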

Keywords

Evaporation Rate, Travelling Salesman Problem, Pheromone Trail, Synthetic Pheromone, Belief Factor


Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Ndedi Monekosso (1)
  • Paolo Remagnino (1)

  1. Digital Imaging Research Centre, School of Computing and Information Systems, Kingston University, UK
