Abstract
In this chapter, we model vehicle routing problems that contain uncertainty and require stepwise planning as a (finite) Markov decision process (MDP). We define the MDP in Sect. 4.2. The MDP contains three (sub-)models: decision states, dynamic decision making, and stochastic transitions. To model routing applications as an MDP, we have to specify these three (sub-)models. In Sect. 4.1, we model replanning as dynamism. In Sect. 4.4, we model a decision state for a vehicle routing planning situation. We model uncertainty as stochasticity. In Sect. 4.5, we give a short overview of how the main drivers of uncertainty are generally modeled in the literature. Finally, in Sect. 4.6, we give an overview of how SDVRPs are modeled as MDPs.
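The three (sub-)models named above can be sketched in code. The following is a minimal, illustrative sketch only, under simplifying assumptions not taken from the chapter: a decision state reduced to (time, vehicle position, open requests), decisions restricted to serving one open request or waiting, and request arrivals drawn from a toy random process. All names (`State`, `feasible_decisions`, `transition`) are hypothetical.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    """Decision state: decision point, vehicle position, revealed requests."""
    time: int
    position: int
    open_requests: frozenset

def feasible_decisions(state):
    """Dynamic decision making: serve one open request next, or wait."""
    return list(state.open_requests) or ["wait"]

def transition(state, decision, rng):
    """Stochastic transition: apply the decision, then reveal new requests."""
    served = set() if decision == "wait" else {decision}
    # Toy arrival process: 0-2 new requests per period (assumption).
    arrivals = {state.time * 10 + i for i in range(rng.randint(0, 2))}
    next_state = State(
        time=state.time + 1,
        position=state.position if decision == "wait" else decision,
        open_requests=frozenset((state.open_requests - served) | arrivals),
    )
    reward = 0 if decision == "wait" else 1  # e.g., one request served
    return next_state, reward

# Greedy rollout of one sample path over five decision points.
rng = random.Random(0)
state, total = State(0, 0, frozenset({1, 2})), 0
for _ in range(5):
    decision = feasible_decisions(state)[0]
    state, reward = transition(state, decision, rng)
    total += reward
print(total)
```

A solution approach for the MDP would replace the greedy choice in the rollout with a policy, e.g., one derived by approximate dynamic programming.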
Notes
- 1.
- Notably, under the presented definition, a deterministic problem is always static. The applied solution approach may nevertheless be dynamic, e.g., applied on a rolling horizon.
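The distinction in this note, namely that a fully known (deterministic, hence static) problem can still be solved by a dynamic, rolling-horizon procedure, can be sketched as follows. The function and data are hypothetical; sorting the window stands in for solving each subproblem.

```python
def rolling_horizon_plan(requests, horizon):
    """Re-plan every `horizon` periods over a fully known instance.

    requests: list of (due_time, customer) pairs, all known upfront.
    """
    route = []
    t = 0
    end = max(due for due, _ in requests) + 1
    while t < end:
        # Only requests due within the current window enter the subproblem.
        window = [c for due, c in requests if t <= due < t + horizon]
        route.extend(sorted(window))  # stand-in for an optimization step
        t += horizon
    return route

print(rolling_horizon_plan([(0, 3), (1, 1), (4, 2), (5, 4)], horizon=3))
# → [1, 3, 2, 4]
```

The instance contains no uncertainty, yet decisions are made stepwise per window, which is why the solution approach, not the problem, is dynamic here.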
Copyright information
© 2017 Springer International Publishing AG
Cite this chapter
Ulmer, M.W. (2017). Modeling. In: Approximate Dynamic Programming for Dynamic Vehicle Routing. Operations Research/Computer Science Interfaces Series, vol 61. Springer, Cham. https://doi.org/10.1007/978-3-319-55511-9_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-55510-2
Online ISBN: 978-3-319-55511-9
eBook Packages: Business and Management (R0)