Workshop: Generalization in Planning (GenPlan '23)

Robust Driving Across Scenarios via Multi-residual Task Learning

Vindula Jayawardana · Sirui Li · Cathy Wu · Yashar Farid · Kentaro Oguchi

Keywords: [ autonomous driving ] [ reinforcement learning ] [ eco-driving ] [ generalization ]


Conventional control methods, such as model-based control, are widely used in autonomous driving for their efficiency and reliability. However, real-world autonomous driving contends with a multitude of diverse traffic scenarios that challenge these planning algorithms. Model-free deep reinforcement learning (DRL) offers a promising alternative, but learning DRL control policies that generalize across multiple traffic scenarios remains a challenge. To address this, we introduce Multi-residual Task Learning (MRTL), a generic learning framework based on multi-task learning that, for a set of task scenarios, decomposes the control into nominal components, which are effectively solved by conventional control methods, and residual terms, which are learned. We employ MRTL for fleet-level emission reduction in mixed traffic, using autonomous vehicles as a means of system control. By analyzing the performance of MRTL across nearly 600 signalized intersections and 1200 traffic scenarios, we demonstrate that it is a promising approach for synergizing the strengths of DRL and conventional methods in generalizable control.
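The decomposition described in the abstract can be sketched as follows: the control action is the sum of a nominal term produced by a conventional controller and a residual correction produced by a learned policy. This is a minimal illustrative sketch, not the authors' implementation; the proportional speed controller, the linear residual policy, the state features, and all parameter values are assumptions chosen for clarity.

```python
import numpy as np

def nominal_controller(state, target_speed=15.0, gain=0.5):
    # Stand-in for a conventional model-based controller: a simple
    # proportional law that accelerates the vehicle toward a target speed.
    return gain * (target_speed - state["speed"])

def residual_policy(state, weights):
    # Stand-in for the learned residual: in the paper this would be a DRL
    # policy network; here it is a linear map over hypothetical features.
    features = np.array([state["speed"], state["dist_to_intersection"]])
    return float(weights @ features)

def mrtl_action(state, weights):
    # MRTL-style control: nominal component plus learned residual term.
    return nominal_controller(state) + residual_policy(state, weights)

state = {"speed": 10.0, "dist_to_intersection": 50.0}
weights = np.array([0.01, -0.002])  # in practice, trained across task scenarios
action = mrtl_action(state, weights)
```

With an untrained (near-zero) residual, the action stays close to the nominal controller's output, which is one appeal of residual formulations: the learned component only has to correct the conventional controller in scenarios where it falls short.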