

Oral
in
Workshop: Foundation Models for Decision Making

Agnostic Architecture for Heterogeneous Multi-Environment Reinforcement Learning

Kukjin Kim · Changhee Joo

Presentation: Foundation Models for Decision Making
Fri 15 Dec 6:15 a.m. PST — 3:30 p.m. PST

Abstract:

Training an agent from scratch for every new environment is inefficient in reinforcement learning. If an agent could instead learn across diverse environments in any form, accumulate a wealth of prior knowledge, and transfer it effectively, it could save substantial computational and temporal costs. However, pretraining and transfer learning across multiple environments are challenging because states and actions differ among RL problems. One remedy is an environment-specific architecture with parameter sharing: dedicated layers handle each distinct state-action space, and new layers are added for each new environment to enable transfer learning. While this approach performs well in pretraining, we found it lacks efficacy in transfer learning. To address these issues, we introduce a flexible and agnostic architecture capable of learning across multiple environments simultaneously and of transferring to new environments as they arrive. For this architecture, we propose algorithms that make multi-environment training more efficient in both online and offline RL settings.
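A minimal sketch of the parameter-sharing baseline the abstract describes, assuming a shared trunk with per-environment input encoders and action heads, where new layers are attached for each new environment. All class, method, and environment names here are hypothetical illustrations, not the authors' implementation:

    # Hypothetical parameter-sharing baseline: a shared trunk plus
    # per-environment layers for mismatched state-action spaces.
    import torch
    import torch.nn as nn

    class MultiEnvPolicy(nn.Module):
        def __init__(self, hidden_dim=256):
            super().__init__()
            # Trunk whose parameters are shared across all environments.
            self.trunk = nn.Sequential(
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            )
            self.encoders = nn.ModuleDict()  # per-environment state encoders
            self.heads = nn.ModuleDict()     # per-environment action heads

        def add_environment(self, env_id, state_dim, action_dim, hidden_dim=256):
            # New layers added per environment, as in the baseline approach.
            self.encoders[env_id] = nn.Linear(state_dim, hidden_dim)
            self.heads[env_id] = nn.Linear(hidden_dim, action_dim)

        def forward(self, env_id, state):
            z = torch.relu(self.encoders[env_id](state))
            return self.heads[env_id](self.trunk(z))

    # Usage: pretrain on two environments, then attach layers for a third
    # (hypothetical environments and dimensions).
    policy = MultiEnvPolicy()
    policy.add_environment("cartpole", state_dim=4, action_dim=2)
    policy.add_environment("lunarlander", state_dim=8, action_dim=4)
    logits = policy("cartpole", torch.randn(1, 4))

Under this baseline, transferring to a new environment means training freshly added encoder and head layers against the pretrained trunk; the abstract reports that this works well for pretraining but falls short in transfer, which motivates the proposed agnostic architecture.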
