Belief Propagation (BP) is an important message-passing algorithm for various reasoning tasks over graphical models, including solving Constraint Optimization Problems (COPs). It has been shown that BP can achieve state-of-the-art performance on various benchmarks by mixing old and new messages before sending the new one, a technique known as damping. However, tuning a static damping factor for BP is not only laborious but also harms performance. Moreover, existing BP algorithms treat each variable node's neighbors equally when composing a new message, which also limits their exploration ability. To address these issues, we seamlessly integrate BP, Gated Recurrent Units (GRUs), and Graph Attention Networks (GATs) within the message-passing framework to reason about dynamic weights and damping factors for composing new BP messages. Our model, Deep Attentive Belief Propagation (DABP), takes the factor graph and the BP messages in each iteration as input and infers the optimal weights and damping factors through GRUs and GATs, followed by a multi-head attention layer. Furthermore, unlike existing neural-based BP variants, we propose a novel self-supervised learning algorithm for DABP with a smoothed solution cost, which does not require expensive training labels and avoids the common out-of-distribution issue through efficient online learning. Extensive experiments show that our model significantly outperforms state-of-the-art baselines.
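To make the role of damping concrete, here is a minimal illustrative sketch in Python/NumPy (not the authors' implementation; the function name damp and factor lam are hypothetical). A damped BP update sends a convex combination of the previous message and the freshly computed one; DABP's contribution is to infer this mixing factor dynamically per message rather than fixing it globally.

import numpy as np

def damp(msg_old: np.ndarray, msg_new: np.ndarray, lam: float) -> np.ndarray:
    # Convex mix of the previous and freshly computed BP message.
    # lam = 0 recovers vanilla BP; larger lam slows the update,
    # which often stabilizes BP on loopy factor graphs.
    return lam * msg_old + (1.0 - lam) * msg_new

# Example: a message over a variable with a 3-value domain.
msg_old = np.array([0.2, 0.5, 0.3])
msg_new = np.array([0.6, 0.1, 0.3])
print(damp(msg_old, msg_new, lam=0.5))  # -> [0.4 0.3 0.3]

In DABP, lam (and the per-neighbor weights used to compose msg_new) would instead be produced by the GRU/GAT modules at each iteration, rather than set by hand as above.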
Author Information
Yanchen Deng (Nanyang Technological University)
Shufeng Kong (Nanyang Technological University)
Caihua Liu (Sun Yat-sen University)
Bo An (Nanyang Technological University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems
  Wed. Nov 30th through Dec 1st, Room Hall J #208
More from the Same Authors
- 2022 Poster: Generalizing Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses
  Yuzhou Cao · Tianchi Cai · Lei Feng · Lihong Gu · Jinjie GU · Bo An · Gang Niu · Masashi Sugiyama
- 2022 Spotlight: Lightning Talks 6A-1
  Ziyi Wang · Nian Liu · Yaming Yang · Qilong Wang · Yuanxin Liu · Zongxin Yang · Yizhao Gao · Yanchen Deng · Dongze Lian · Nanyi Fei · Ziyu Guan · Xiao Wang · Shufeng Kong · Xumin Yu · Daquan Zhou · Yi Yang · Fandong Meng · Mingze Gao · Caihua Liu · Yongming Rao · Zheng Lin · Haoyu Lu · Zhe Wang · Jiashi Feng · Zhaolin Zhang · Deyu Bo · Xinchao Wang · Chuan Shi · Jiangnan Li · Jiangtao Xie · Jie Zhou · Zhiwu Lu · Wei Zhao · Bo An · Jiwen Lu · Peihua Li · Jian Pei · Hao Jiang · Cai Xu · Peng Fu · Qinghua Hu · Yijie Li · Weigang Lu · Yanan Cao · Jianbin Huang · Weiping Wang · Zhao Cao · Jie Zhou
- 2022 Poster: Alleviating "Posterior Collapse" in Deep Topic Models via Policy Gradient
  Yewen Li · Chaojie Wang · Zhibin Duan · Dongsheng Wang · Bo Chen · Bo An · Mingyuan Zhou
- 2022 Poster: Out-of-Distribution Detection with An Adaptive Likelihood Ratio on Informative Hierarchical VAE
  Yewen Li · Chaojie Wang · Xiaobo Xia · Tongliang Liu · xin miao · Bo An
- 2021 Poster: RMIX: Learning Risk-Sensitive Policies for Cooperative Reinforcement Learning Agents
  Wei Qiu · Xinrun Wang · Runsheng Yu · Rundong Wang · Xu He · Bo An · Svetlana Obraztsova · Zinovi Rabinovich
- 2021 Poster: Open-set Label Noise Can Improve Robustness Against Inherent Label Noise
  Hongxin Wei · Lue Tao · RENCHUNZI XIE · Bo An
- 2019 Poster: Manipulating a Learning Defender and Ways to Counteract
  Jiarui Gan · Qingyu Guo · Long Tran-Thanh · Bo An · Michael Wooldridge