Generalization has been one of the major challenges for learning dynamics models in model-based reinforcement learning. However, previous work on action-conditioned dynamics prediction focuses on learning the pixel-level motion and thus does not generalize well to novel environments with different object layouts. In this paper, we present a novel object-oriented framework, called object-oriented dynamics predictor (OODP), which decomposes the environment into objects and predicts the dynamics of objects conditioned on both actions and object-to-object relations. It is an end-to-end neural network and can be trained in an unsupervised manner. To enable the generalization ability of dynamics learning, we design a novel CNN-based relation mechanism that is class-specific (rather than object-specific) and exploits the locality principle. Empirical results show that OODP significantly outperforms previous methods in terms of generalization over novel environments with various object layouts. OODP is able to learn from very few environments and accurately predict dynamics in a large number of unseen environments. In addition, OODP learns semantically and visually interpretable dynamics models.
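The two design choices the abstract highlights — class-specific (rather than object-specific) relation effects and the locality principle — can be illustrated with a toy sketch. The sketch below is not the paper's CNN architecture (OODP learns object masks and relation effects end-to-end from pixels); it only shows, with hypothetical names (`predict_motion`, `class_effects`, `action_effects`), how per-class effects shared across all instances, restricted to nearby objects, compose into a motion prediction.

```python
import numpy as np

def predict_motion(agent_pos, objects, action, class_effects, action_effects, radius=2.0):
    """Toy illustration: next position = action effect + sum of class-specific
    relation effects from nearby objects.

    objects: list of (class_name, position) pairs.
    class_effects: class_name -> motion effect (dx, dy), shared by every
        instance of that class (class-specific, not object-specific).
    action_effects: action -> base motion (dx, dy).
    radius: locality principle -- only nearby objects influence the dynamics.
    """
    motion = np.array(action_effects[action], dtype=float)
    for cls, pos in objects:
        if np.linalg.norm(np.asarray(pos, dtype=float) - agent_pos) <= radius:
            motion += np.array(class_effects[cls], dtype=float)
    return agent_pos + motion

# Example: the "wind" object within range slows the rightward move;
# the distant "wall" object is ignored by the locality cutoff.
nxt = predict_motion(np.array([0.0, 0.0]),
                     [("wind", (0.5, 0.0)), ("wall", (10.0, 10.0))],
                     "right",
                     {"wind": (-0.5, 0.0), "wall": (0.0, 0.0)},
                     {"right": (1.0, 0.0)})
```

Because the effects are keyed by class rather than by object instance, the same learned dynamics transfer to unseen environments with different object layouts — the generalization property the abstract emphasizes.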
Author Information
Guangxiang Zhu (Tsinghua University)
Zhiao Huang (IIIS, Tsinghua University)
Chongjie Zhang (Tsinghua University)
More from the Same Authors
- 2022 : Multi-Agent Policy Transfer via Task Relationship Modeling
  Rong-Jun Qin · Feng Chen · Tonghan Wang · Lei Yuan · Xiaoran Wu · Yipeng Kang · Zongzhang Zhang · Chongjie Zhang · Yang Yu
- 2022 : Model and Method: Training-Time Attack for Cooperative Multi-Agent Reinforcement Learning
  Siyang Wu · Tonghan Wang · Xiaoran Wu · Jingfeng ZHANG · Yujing Hu · Changjie Fan · Chongjie Zhang
- 2022 Spotlight: CUP: Critic-Guided Policy Reuse
  Jin Zhang · Siyuan Li · Chongjie Zhang
- 2022 Spotlight: RORL: Robust Offline Reinforcement Learning via Conservative Smoothing
  Rui Yang · Chenjia Bai · Xiaoteng Ma · Zhaoran Wang · Chongjie Zhang · Lei Han
- 2022 Spotlight: Lightning Talks 5A-1
  Yao Mu · Jin Zhang · Haoyi Niu · Rui Yang · Mingdong Wu · Ze Gong · shubham sharma · Chenjia Bai · Yu ("Tony") Zhang · Siyuan Li · Yuzheng Zhuang · Fangwei Zhong · Yiwen Qiu · Xiaoteng Ma · Fei Ni · Yulong Xia · Chongjie Zhang · Hao Dong · Ming Li · Zhaoran Wang · Bin Wang · Jianyu Chen · Guyue Zhou · Lei Han · Jianming HU · Jianye Hao · Xianyuan Zhan · Ping Luo
- 2022 Spotlight: Non-Linear Coordination Graphs
  Yipeng Kang · Tonghan Wang · Qianlan Yang · Chongjie Zhang
- 2021 Poster: Episodic Multi-agent Reinforcement Learning with Curiosity-driven Exploration
  Lulu Zheng · Jiarui Chen · Jianhao Wang · Jiamin He · Yujing Hu · Yingfeng Chen · Changjie Fan · Yang Gao · Chongjie Zhang
- 2021 Poster: On the Estimation Bias in Double Q-Learning
  Zhizhou Ren · Guangxiang Zhu · Hao Hu · Beining Han · Jianglun Chen · Chongjie Zhang
- 2021 Poster: Model-Based Reinforcement Learning via Imagination with Derived Memory
  Yao Mu · Yuzheng Zhuang · Bin Wang · Guangxiang Zhu · Wulong Liu · Jianyu Chen · Ping Luo · Shengbo Li · Chongjie Zhang · Jianye Hao
- 2021 Poster: Offline Reinforcement Learning with Reverse Model-based Imagination
  Jianhao Wang · Wenzhe Li · Haozhe Jiang · Guangxiang Zhu · Siyuan Li · Chongjie Zhang
- 2021 Poster: Towards Understanding Cooperative Multi-Agent Q-Learning with Value Factorization
  Jianhao Wang · Zhizhou Ren · Beining Han · Jianing Ye · Chongjie Zhang
- 2021 Poster: Celebrating Diversity in Shared Multi-Agent Reinforcement Learning
  Chenghao Li · Tonghan Wang · Chengjie Wu · Qianchuan Zhao · Jun Yang · Chongjie Zhang
- 2020 Poster: Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning
  Guangxiang Zhu · Minghao Zhang · Honglak Lee · Chongjie Zhang
- 2019 Poster: Hierarchical Reinforcement Learning with Advantage-Based Auxiliary Rewards
  Siyuan Li · Rui Wang · Minxue Tang · Chongjie Zhang
- 2017 Poster: Associative Embedding: End-to-End Learning for Joint Detection and Grouping
  Alejandro Newell · Zhiao Huang · Jia Deng