

Reinforcement Learning for Solving the Vehicle Routing Problem

MohammadReza Nazari · Afshin Oroojlooy · Lawrence Snyder · Martin Takac

Room 517 AB #113

Keywords: [ Combinatorial Optimization ] [ Reinforcement Learning and Planning ] [ Reinforcement Learning ]


We present an end-to-end framework for solving the Vehicle Routing Problem (VRP) using reinforcement learning. In this approach, we train a single policy model that finds near-optimal solutions for a broad range of problem instances of similar size, solely by observing reward signals and following feasibility rules. We consider a parameterized stochastic policy and, by applying a policy gradient algorithm to optimize its parameters, obtain a trained model that produces solutions as sequences of consecutive actions in real time, without re-training for every new problem instance. On the capacitated VRP, our approach outperforms classical heuristics and Google's OR-Tools on medium-sized instances in solution quality, with comparable computation time (after training). We demonstrate how our approach can handle problems with split delivery and explore the effect of such deliveries on solution quality. Our framework can be applied to other variants of the VRP, such as the stochastic VRP, and has the potential to be applied more generally to combinatorial optimization problems.
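The decoding-with-feasibility-masking idea in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's actual model: the function names, the two hand-crafted features, and the linear scoring of nodes are assumptions made here for brevity (the paper uses a learned embedding/attention architecture), but the masking of infeasible customers, the stochastic action sampling, and the REINFORCE-style update are the mechanisms the abstract describes.

```python
import numpy as np

def decode_route(coords, demands, capacity, theta, rng):
    """Sample one CVRP tour as a sequence of node visits (node 0 = depot).

    A feasibility mask hides customers that are already served or whose
    demand exceeds the vehicle's remaining load; visiting the depot
    refills the vehicle. Returns the route, its log-probability, and the
    score-function gradient of that log-probability w.r.t. theta.
    Assumes every individual demand fits in the vehicle capacity.
    """
    n = len(demands)
    remaining = demands.astype(float).copy()
    load, pos = capacity, 0
    route, logp, grad = [0], 0.0, np.zeros_like(theta)
    while remaining[1:].sum() > 0:
        # illustrative features: negative distance to each node, scaled demand
        feats = np.stack([-np.linalg.norm(coords - coords[pos], axis=1),
                          remaining / capacity], axis=1)
        logits = feats @ theta
        mask = np.ones(n, dtype=bool)
        mask[1:] = (remaining[1:] > 0) & (remaining[1:] <= load)
        mask[0] = pos != 0          # allow a depot refill, but no self-loop
        logits[~mask] = -np.inf
        p = np.exp(logits - logits[mask].max())
        p /= p.sum()
        nxt = rng.choice(n, p=p)
        logp += np.log(p[nxt])
        grad += feats[nxt] - p @ feats   # d/d(theta) log softmax(linear)
        if nxt == 0:
            load = capacity
        else:
            load -= remaining[nxt]
            remaining[nxt] = 0.0
        route.append(nxt)
        pos = nxt
    route.append(0)                 # return to the depot
    return route, logp, grad

def tour_length(coords, route):
    pts = coords[route]
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

def reinforce_step(coords, demands, capacity, theta, rng, batch=16, lr=0.05):
    """One REINFORCE update: reward = -tour length, mean-reward baseline."""
    samples = [decode_route(coords, demands, capacity, theta, rng)
               for _ in range(batch)]
    rewards = np.array([-tour_length(coords, r) for r, _, _ in samples])
    adv = rewards - rewards.mean()
    grad = sum(a * g for a, (_, _, g) in zip(adv, samples)) / batch
    return theta + lr * grad
```

After training on sampled instances, decoding a new instance requires only a forward pass through the policy, which is why the trained model produces solutions in real time without re-training.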
