This research proposes a procedure for identifying dynamic routing policies in stochastic transportation networks, addressing the problem of maximizing the probability of arriving on time. Given a current location (node) and the probability density functions of the link travel times, the goal is to identify the next node to visit so that the probability of reaching the destination by a deadline t is maximized. The Bellman principle of optimality is applied to formulate the mathematical model of this problem. The unknown functions describing the maximum probability of arriving on time are estimated accurately for a few sample networks using the Picard method of successive approximations; the maximum probabilities can be evaluated without enumerating the network paths. The Laplace transform and its numerical inversion are introduced to reduce the computational cost of evaluating the convolution integrals that arise in the successive-approximation procedure.
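The successive-approximation scheme described above can be sketched on a toy network. The Bellman recursion for the maximum on-time-arrival probability u_i(t) takes the form u_i(t) = max over successors j of the convolution integral of the link density f_ij with u_j, and Picard iteration repeatedly re-evaluates this recursion from an initial guess. The three-node network, the exponential link densities, the time grid, and all variable names below are illustrative assumptions for this sketch, not data from the paper:

```python
import numpy as np

# Hypothetical 3-node network: links 0->1, 0->2, 1->2; node 2 is the destination.
dt = 0.05
t = np.arange(0.0, 10.0, dt)  # uniform time grid for discretizing the integrals

def exp_pdf(rate):
    """Discretized exponential travel-time density on the grid (illustrative choice)."""
    return rate * np.exp(-rate * t)

# links[(i, j)] = travel-time pdf for link i -> j (assumed data)
links = {(0, 1): exp_pdf(1.0), (0, 2): exp_pdf(0.3), (1, 2): exp_pdf(2.0)}
nodes = [0, 1, 2]
dest = 2

# u[i][k] approximates the max probability of reaching dest from node i within t[k]
u = {i: np.zeros_like(t) for i in nodes}
u[dest] = np.ones_like(t)  # boundary condition: already at the destination

# Picard successive approximations of the Bellman recursion
#   u_i(t) = max_j  integral_0^t f_ij(s) u_j(t - s) ds
for _ in range(20):  # fixed-point iteration; 20 passes is ample for this tiny network
    new_u = {dest: u[dest]}
    for i in nodes:
        if i == dest:
            continue
        best = np.zeros_like(t)
        for (a, j), pdf in links.items():
            if a != i:
                continue
            # Riemann-sum approximation of the convolution integral
            conv = np.convolve(pdf, u[j])[: len(t)] * dt
            best = np.maximum(best, np.minimum(conv, 1.0))  # clip discretization overshoot
        new_u[i] = best
    u = new_u
```

After convergence, `u[0][-1]` approximates the best achievable probability of reaching node 2 from node 0 within the 10-unit horizon; the maximizing link at each node and time yields the dynamic routing policy. Each Picard pass costs one discrete convolution per link, which is exactly the expense the paper's Laplace-transform approach is designed to reduce.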
Keywords: optimal routing; stochastic shortest path problems; dynamic programming; convolution integrals