
Intelligent Traffic Control by Multi-agent Cooperative Q Learning (MCQL)

  • Conference paper
  • First Online:
Intelligent Computing and Information and Communication

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 673))

Abstract

Traffic congestion frequently occurs because of the demand placed on roads by large numbers of vehicles. The objectives of a cooperative intelligent traffic control system are to increase traffic throughput and to decrease the average waiting time of each vehicle. Each signal seeks to improve local traffic flow; in the process, the signals form a cooperation strategy, subject to constraints from neighboring signals, to maximize their individual benefit. A superior traffic signal scheduling strategy helps resolve this difficulty. Because many parameters can influence the traffic control model, it is hard to learn the best possible result. Traffic light controllers that cannot learn from past outcomes are unable to adapt to uncertain changes in traffic flow. By capturing instantaneous features of the real traffic scenario, a traffic control model based on a reinforcement learning algorithm can be used to obtain good timing rules. The proposed real-time traffic control optimization model is able to update the traffic signal scheduling rules successfully. The model computes a traffic value for each vehicle, consisting of the delay time, the number of vehicles stopped at the signal, and the newly arriving vehicles, in order to learn and establish the optimal actions. The experimental results show a major improvement in traffic control, demonstrating that the proposed model is capable of enabling real-time dynamic traffic control.
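The abstract describes a tabular Q-learning agent per signal whose reward is a "traffic value" built from delay time, queued vehicles, and new arrivals. The sketch below illustrates that idea in minimal form; the state discretization, action set, class and function names, and reward weights are all illustrative assumptions, not the authors' actual formulation.

```python
import random
from collections import defaultdict

class SignalAgent:
    """Hypothetical single-intersection Q-learning agent.

    State: a tuple of discretized queue lengths per approach.
    Action: index of the phase that receives the green light.
    """

    def __init__(self, actions=(0, 1), alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy: explore with probability epsilon, else exploit.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup toward the one-step TD target.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def traffic_value_reward(delay, queued, arrivals, w=(1.0, 0.5, 0.2)):
    # Negative weighted "traffic value": lower delay, shorter queues,
    # and fewer arriving vehicles yield a higher (less negative) reward.
    # The weights w are illustrative, not taken from the paper.
    return -(w[0] * delay + w[1] * queued + w[2] * arrivals)
```

In the cooperative setting sketched in the abstract, neighboring agents would additionally exchange state or value information so that each agent's choice accounts for adjacent intersections; the single-agent update above is the building block.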



Author information


Correspondence to Deepak A. Vidhate.



Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Vidhate, D.A., Kulkarni, P. (2018). Intelligent Traffic Control by Multi-agent Cooperative Q Learning (MCQL). In: Bhalla, S., Bhateja, V., Chandavale, A., Hiwale, A., Satapathy, S. (eds) Intelligent Computing and Information and Communication. Advances in Intelligent Systems and Computing, vol 673. Springer, Singapore. https://doi.org/10.1007/978-981-10-7245-1_47


  • DOI: https://doi.org/10.1007/978-981-10-7245-1_47

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-7244-4

  • Online ISBN: 978-981-10-7245-1

  • eBook Packages: Engineering (R0)
