Workshop | Uncertainty-Driven Pessimistic Q-Ensemble for Offline-to-Online Reinforcement Learning
Ingook Jang · Seonghyun Kim
Workshop | Sparse Q-Learning: Offline Reinforcement Learning with Implicit Value Regularization
Haoran Xu · Li Jiang · Li Jianxiong · Zhuoran Yang · Zhaoran Wang · Xianyuan Zhan
Workshop | Offline evaluation in RL: soft stability weighting to combine fitted Q-learning and model-based methods
Briton Park · Xian Wu · Bin Yu · Angela Zhou
Workshop | CLaP: Conditional Latent Planners for Offline Reinforcement Learning
Harry Shin · Rose Wang
Workshop | Benchmarking Offline Reinforcement Learning Algorithms for E-Commerce Order Fraud Evaluation
Soysal Degirmenci · Christopher S Jones
Workshop | Raisin: Residual Algorithms for Versatile Offline Reinforcement Learning
Braham Snyder · Yuke Zhu
Workshop | Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling
Ashish Kumar · Ilya Kuzovkin
Poster | Tue 14:00 | On Gap-dependent Bounds for Offline Reinforcement Learning
Xinqi Wang · Qiwen Cui · Simon Du
Poster | Wed 9:00 | Towards Learning Universal Hyperparameter Optimizers with Transformers
Yutian Chen · Xingyou Song · Chansoo Lee · Zi Wang · Richard Zhang · David Dohan · Kazuya Kawakami · Greg Kochanski · Arnaud Doucet · Marc'Aurelio Ranzato · Sagi Perel · Nando de Freitas
Poster | Thu 14:00 | Bellman Residual Orthogonalization for Offline Reinforcement Learning
Andrea Zanette · Martin J Wainwright
Workshop | Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning?
Gunshi Gupta · Tim G. J. Rudner · Rowan McAllister · Adrien Gaidon · Yarin Gal