Poster
Hybrid Models for Learning to Branch
Prateek Gupta · Maxime Gasse · Elias Khalil · Pawan K Mudigonda · Andrea Lodi · Yoshua Bengio

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #901

A recent Graph Neural Network (GNN) approach for learning to branch has been shown to successfully reduce the running time of branch-and-bound algorithms for Mixed Integer Linear Programming (MILP). While the GNN relies on a GPU for inference, MILP solvers are purely CPU-based. This severely limits its application, as many practitioners may not have access to high-end GPUs. In this work, we ask two key questions. First, in a more realistic setting where only a CPU is available, is the GNN model still competitive? Second, can we devise an alternative, computationally inexpensive model that retains the predictive power of the GNN architecture? We answer the first question in the negative, and address the second question by proposing a new hybrid architecture for efficient branching on CPU machines. The proposed architecture combines the expressive power of GNNs with computationally inexpensive multi-layer perceptrons (MLPs) for branching. We evaluate our methods on four classes of MILP problems and show that they lead to up to a 26% reduction in solver running time compared to state-of-the-art methods without a GPU, while generalizing to harder problems than those seen during training. The code for this project is publicly available at https://github.com/pg2455/Hybrid-learn2branch.
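The hybrid idea in the abstract can be sketched as follows. This is a minimal, hypothetical NumPy illustration (not the authors' implementation): an expensive GNN-style message-passing step over the variable/constraint bipartite graph is run once to produce variable embeddings, and a cheap MLP then scores branching candidates at each node by combining those fixed embeddings with inexpensive node-level features. All dimensions, weight names, and features here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical toy dimensions (illustrative only) ---
n_vars, n_cons = 6, 4      # variables / constraints in a toy MILP
d_emb, d_node = 8, 3       # embedding size, cheap per-node feature size

# Bipartite incidence of the toy MILP (constraints x variables)
A = (rng.random((n_cons, n_vars)) < 0.5).astype(float)

def relu(x):
    return np.maximum(x, 0.0)

# One round of GNN-style message passing over the bipartite graph,
# run ONCE (the expensive step that would otherwise need a GPU).
W_v = rng.standard_normal((1, d_emb))          # lift raw variable features
W_c = rng.standard_normal((d_emb, d_emb))      # constraint-side transform
var_feats = rng.standard_normal((n_vars, 1))   # e.g. objective coefficients
h_v = relu(var_feats @ W_v)                    # initial variable embeddings
h_c = relu(A @ h_v @ W_c)                      # constraint-side messages
h_v = relu(h_v + A.T @ h_c @ W_c.T)            # updated variable embeddings

# Cheap MLP scorer reused at EVERY branch-and-bound node: it combines
# the fixed embeddings with inexpensive node-specific features.
W1 = rng.standard_normal((d_emb + d_node, 16))
W2 = rng.standard_normal((16, 1))

def branching_scores(node_feats):
    x = np.concatenate([h_v, node_feats], axis=1)
    return (relu(x @ W1) @ W2).ravel()

node_feats = rng.standard_normal((n_vars, d_node))  # cheap per-node features
scores = branching_scores(node_feats)
best_var = int(np.argmax(scores))  # variable selected for branching
```

The point of the design is cost asymmetry: the quadratic-ish message passing happens once, while the per-node work is a small dense MLP that runs quickly on a CPU.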

Author Information

Prateek Gupta (University of Oxford)
Maxime Gasse (Polytechnique Montréal)

I am a machine learning researcher within the Data Science for Real-Time Decision Making Canada Excellence Research Chair (CERC), and also part of the MILA research institute on artificial intelligence in Montréal, Canada. The question that motivates my research is: can machines think? My broad research interests include:
- probabilistic graphical models and their theoretical properties (my PhD thesis)
- structured prediction, in particular multi-label classification
- combinatorial optimization using machine learning (see our Ecole library)
- causality, specifically in the context of reinforcement learning

Elias Khalil (University of Toronto)
Pawan K Mudigonda (University of Oxford)
Andrea Lodi (École Polytechnique Montréal)
Yoshua Bengio (Mila / U. Montreal)

Yoshua Bengio is a Full Professor in the computer science and operations research department at U. Montreal, scientific director and founder of Mila and of IVADO, a 2018 Turing Award recipient, Canada Research Chair in Statistical Learning Algorithms, and a Canada CIFAR AI Chair. He pioneered deep learning and, in 2018, received the most citations per day among all computer scientists worldwide. He is an Officer of the Order of Canada and a member of the Royal Society of Canada; he was awarded the Killam Prize, the Marie-Victorin Prize, and Radio-Canada Scientist of the Year in 2017. He is a member of the NeurIPS advisory board, co-founder of the ICLR conference, and program director of the CIFAR program on Learning in Machines and Brains. His goal is to contribute to uncovering the principles that give rise to intelligence through learning, and to favour the development of AI for the benefit of all.
