
Learning to Branch with Tree MDPs
Lara Scavuzzo · Feng Chen · Didier Chetelat · Maxime Gasse · Andrea Lodi · Neil Yorke-Smith · Karen Aardal

Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #308

State-of-the-art Mixed Integer Linear Programming (MILP) solvers combine systematic tree search with a plethora of hard-coded heuristics, such as branching rules. While approaches to learning branching strategies have received increasing attention and have shown very promising results, most of the literature focuses on learning fast approximations of the strong branching rule. Instead, we propose to learn branching rules from scratch with Reinforcement Learning (RL). We revisit the work of Etheve et al. (2020) and propose a generalization of Markov Decision Processes (MDPs), which we call tree MDPs, that provides a more suitable formulation of the branching problem. We derive a policy gradient theorem for tree MDPs that exhibits better credit assignment compared to its temporal counterpart. We demonstrate through computational experiments that this new framework is suitable for tackling the learning-to-branch problem in MILP, and improves learning convergence.
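The credit-assignment difference the abstract describes can be illustrated with a toy sketch (hypothetical code, not the authors' implementation; all names are invented for illustration). In a temporal MDP formulation every branching decision is credited with the cost of the entire search, whereas in a tree MDP each decision is credited only with the cost of the subtree it creates:

```python
# Hypothetical sketch contrasting credit assignment in a temporal MDP
# versus a tree MDP, using tree size (nodes expanded) as the cost.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """One node of a toy branch-and-bound tree (illustrative only)."""
    children: List["Node"] = field(default_factory=list)


def subtree_size(node: Node) -> int:
    """Cost of the subtree rooted at `node`: here, 1 per node expanded."""
    return 1 + sum(subtree_size(c) for c in node.children)


def temporal_returns(root: Node) -> List[int]:
    """Temporal view: every decision is credited with the full tree cost."""
    total = subtree_size(root)
    return [total] * total  # one branching decision per expanded node


def tree_returns(node: Node) -> List[int]:
    """Tree MDP view: each decision is credited only with its own subtree."""
    out = [subtree_size(node)]
    for c in node.children:
        out.extend(tree_returns(c))
    return out


# A small tree: root with two children; the left child has one child.
root = Node(children=[Node(children=[Node()]), Node()])

print(temporal_returns(root))  # [4, 4, 4, 4]
print(tree_returns(root))      # [4, 2, 1, 1]
```

The tree-MDP returns decay with depth, so a policy gradient weighted by them attributes the cost of a subtree to the decision that spawned it rather than to every decision in the episode.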

Author Information

Lara Scavuzzo (TU Delft)
Feng Chen
Didier Chetelat (Polytechnique Montreal)
Maxime Gasse (Polytechnique Montréal)

I am a machine learning researcher within the Data Science for Real-Time Decision Making Canada Excellence Research Chair (CERC), and also part of the MILA research institute on artificial intelligence in Montréal, Canada. The question that motivates my research is: can machines think? My broad research interests include:
- probabilistic graphical models and their theoretical properties (my PhD thesis)
- structured prediction, in particular multi-label classification
- combinatorial optimization using machine learning (see our Ecole library)
- causality, specifically in the context of reinforcement learning

Andrea Lodi (Polytechnique Montreal)
Neil Yorke-Smith (Delft University of Technology)

Neil Yorke-Smith directs the Socio-Technical Algorithmic Research (STAR) Lab at TU Delft. His research addresses a fundamental question of the AI era: how can technology help people make decisions in complex socio-technical situations? Yorke-Smith is an Associate Professor in the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS/EWI), Delft University of Technology, The Netherlands. Yorke-Smith has been a Visiting Scholar at St Edmund's College, Cambridge and at the Center for the Study of Language and Information, Stanford University, and a Visiting Research Fellow at the Cambridge Judge Business School and at RMIT University. Previously, Yorke-Smith held positions at the American University of Beirut, Lebanon, and SRI International, USA. The author of more than 100 scholarly publications, Yorke-Smith has consulted in Silicon Valley, Europe, and the Middle East. He holds a doctorate degree from Imperial College London. Yorke-Smith was Programme Chair of AAMAS'20 and of IAAI'21, and sits on the Editorial Boards of the journals AI, Constraints, JAAMAS and JAIR. Yorke-Smith is a Senior Member of AAAI, a Senior Member of ACM, and a member of CLAIRE and ELLIS. In addition to directing the STAR Lab, Yorke-Smith currently serves as manager of the Dutch Citizens and Society in the Energy Transition (CaSET) ELSA AI Lab.

Karen Aardal
