Real-Time Lane Configuration with Coordinated Reinforcement Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12460)

Abstract

Changing the lane configuration of roads based on traffic patterns is a proven way to improve traffic throughput. Traditional lane-direction configuration solutions assume that traffic patterns are known in advance and therefore cannot adapt to changing traffic conditions, which makes them unsuitable for real-world applications. We propose a dynamic lane configuration solution for improving traffic flow using a two-layer, multi-agent architecture named Coordinated Learning-based Lane Allocation (CLLA). At the bottom layer, a set of reinforcement learning agents finds suitable lane-direction configurations around individual road intersections. The lane-direction changes proposed by the reinforcement learning agents are then coordinated by the upper-level agents to reduce the negative impact of the changes on other parts of the road network. CLLA is the first approach that enables city-wide lane configuration while adapting to changing traffic conditions. Our experimental results show that CLLA reduces the average travel time in congested road networks by 20% compared to an uncoordinated reinforcement learning approach.
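To make the two-layer design concrete, the Python sketch below mimics the kind of architecture the abstract describes: per-intersection reinforcement learning agents that propose lane-direction changes, and an upper-level coordinator that filters out conflicting proposals. The class and method names (IntersectionAgent, Coordinator, propose, coordinate), the state/action encoding, and the conflict rule are illustrative assumptions made for this sketch; they are not taken from the paper, whose CLLA agents, reward design, and coordination logic are more involved.

```python
# Illustrative sketch only; all names and the coordination rule are assumptions,
# not the paper's implementation.
import random
from collections import defaultdict


class IntersectionAgent:
    """Bottom-layer agent: tabular Q-learning over candidate lane-direction
    configurations at a single intersection (hypothetical encoding)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions        # e.g. possible lane-direction assignments
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def propose(self, state):
        """Epsilon-greedy proposal of a lane-direction change."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update from the locally observed traffic reward."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


class Coordinator:
    """Upper-layer agent: accepts local proposals only when they do not
    conflict on a shared road segment (a deliberately simplified rule)."""

    def coordinate(self, proposals):
        """proposals: {intersection_id: (segment_id, direction)}.
        Returns the subset of proposals with no conflicting directions."""
        accepted, claimed = {}, {}
        for node, (segment, direction) in proposals.items():
            if claimed.get(segment, direction) != direction:
                continue  # another intersection already claimed the opposite direction
            claimed[segment] = direction
            accepted[node] = (segment, direction)
        return accepted


# Example: two intersections request opposite directions for shared segment "s1";
# only the first non-conflicting request is accepted.
coordinator = Coordinator()
print(coordinator.coordinate({"i1": ("s1", "north"), "i2": ("s1", "south")}))
```

The point of the split is that each local agent optimizes only its own intersection, while the coordinator is the one component that sees how proposed changes interact across shared road segments before they are applied.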

Notes

  1. https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page.

  2. https://www.openstreetmap.org.

Author information

Correspondence to Udesh Gunarathna.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Gunarathna, U., Xie, H., Tanin, E., Karunasekara, S., Borovica-Gajic, R. (2021). Real-Time Lane Configuration with Coordinated Reinforcement Learning. In: Dong, Y., Mladenić, D., Saunders, C. (eds) Machine Learning and Knowledge Discovery in Databases: Applied Data Science Track. ECML PKDD 2020. Lecture Notes in Computer Science, vol 12460. Springer, Cham. https://doi.org/10.1007/978-3-030-67667-4_18

  • DOI: https://doi.org/10.1007/978-3-030-67667-4_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67666-7

  • Online ISBN: 978-3-030-67667-4

  • eBook Packages: Computer Science, Computer Science (R0)
