Opportunities for multiagent systems and multiagent reinforcement learning in traffic control

Abstract

The increasing demand for mobility in our society poses various challenges to traffic engineering, computer science in general, and artificial intelligence and multiagent systems in particular. As is often the case, providing additional capacity is not possible, so the available transportation infrastructure must be used more efficiently. This relates closely to multiagent systems, as many problems in traffic management and control are inherently distributed. Moreover, many actors in a transportation system fit the concept of an autonomous agent very well: the driver, the pedestrian, the traffic expert, and in some cases even the intersection and the traffic signal controller. However, the “agentification” of a transportation system raises some challenging issues: the number of agents is high; agents are typically highly adaptive; they react to changes in the environment at the individual level but produce unpredictable collective patterns; and they act in a highly coupled environment. This domain therefore poses many challenges for standard multiagent-system techniques such as coordination and learning. This paper has two main objectives: (i) to present problems, methods, approaches, and practices in traffic engineering (especially regarding traffic signal control); and (ii) to highlight open problems and challenges so that future research in multiagent systems can address them.
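To make the reinforcement learning perspective on traffic signal control concrete, the following is a minimal sketch (not taken from the paper) of an intersection modeled as an independent Q-learning agent that chooses which approach receives a green phase. The state encoding, discharge rates, reward, and all parameter values are illustrative assumptions.

```python
# Illustrative sketch only: one intersection as an independent Q-learning
# agent. State = capped queue lengths per approach; action = phase to serve;
# reward = negative total queue. All names and numbers are assumptions.
import random

class SignalAgent:
    def __init__(self, n_phases=2, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_phases = n_phases
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}  # (state, action) -> estimated value

    def act(self, state):
        # Epsilon-greedy phase selection.
        if random.random() < self.epsilon:
            return random.randrange(self.n_phases)
        values = [self.q.get((state, a), 0.0) for a in range(self.n_phases)]
        return max(range(self.n_phases), key=values.__getitem__)

    def learn(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q.get((next_state, a), 0.0)
                        for a in range(self.n_phases))
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

def step(queues, phase):
    # Toy environment: the served approach discharges vehicles, the others
    # accumulate stochastically; queues are capped to keep the state space small.
    queues = list(queues)
    queues[phase] = max(0, queues[phase] - 3)
    for i in range(len(queues)):
        if i != phase:
            queues[i] += random.randint(0, 2)
    next_state = tuple(min(q, 10) for q in queues)
    return next_state, -sum(next_state)

random.seed(0)
agent, state = SignalAgent(), (5, 5)
for _ in range(2000):
    a = agent.act(state)
    next_state, reward = step(state, a)
    agent.learn(state, a, reward, next_state)
    state = next_state
```

In a network setting, many such agents run side by side, which makes the environment non-stationary from each agent's point of view — precisely the coordination and learning challenge the survey discusses.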

Keywords

Multiagent systems · Multiagent learning · Reinforcement learning · Coordination of agents · Game theory · Traffic signal control


Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  1. Instituto de Informática, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil