Type     | Time     | Title                                                                                                      | Authors
Workshop | Fri 6:30 | Towards Safe Model-based Reinforcement Learning                                                            | Felix Berkenkamp
Workshop |          | Symbolic-Model-Based Reinforcement Learning                                                                | Pierre-alexandre Kamienny · Sylvain Lamprier
Workshop |          | Learning to Prioritize Planning Updates in Model-based Reinforcement Learning                              | Brad Burega · John Martin · Michael Bowling
Poster   | Wed 9:00 | Conservative Dual Policy Optimization for Efficient Model-Based Reinforcement Learning                     | Shenao Zhang
Workshop |          | Meta-Learning General-Purpose Learning Algorithms with Transformers                                        | Louis Kirsch · Luke Metz · James Harrison · Jascha Sohl-Dickstein
Workshop |          | Domain Invariant Q-Learning for model-free robust continuous control under visual distractions             | Tom Dupuis · Jaonary Rabarisoa · Quoc Cuong PHAM · David Filliat
Workshop |          | Learning Representations for Reinforcement Learning with Hierarchical Forward Models                       | Trevor McInroe · Lukas Schäfer · Stefano Albrecht
Poster   | Tue 9:00 | Operator Splitting Value Iteration                                                                         | Amin Rakhsha · Andrew Wang · Mohammad Ghavamzadeh · Amir-massoud Farahmand
Poster   |          | Model-Based Opponent Modeling                                                                              | XiaoPeng Yu · Jiechuan Jiang · Wanpeng Zhang · Haobin Jiang · Zongqing Lu
Workshop |          | Offline evaluation in RL: soft stability weighting to combine fitted Q-learning and model-based methods    | Briton Park · Xian Wu · Bin Yu · Angela Zhou
Poster   | Wed 9:00 | You Can’t Count on Luck: Why Decision Transformers and RvS Fail in Stochastic Environments                 | Keiran Paster · Sheila McIlraith · Jimmy Ba
Poster   | Tue 9:00 | Uncertainty Estimation Using Riemannian Model Dynamics for Offline Reinforcement Learning                  | Guy Tennenholtz · Shie Mannor