
Poster

No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO

Skander Moalla · Andrea Miele · Razvan Pascanu · Caglar Gulcehre

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Reinforcement learning (RL) is inherently rife with non-stationarity, since the states and rewards the agent observes during training depend on its changing policy. Therefore, networks in deep RL must be capable of adapting to new observations and fitting new targets. However, previous works have observed that networks in off-policy deep value-based methods exhibit a decrease in representation rank, often correlated with an inability to continue learning or a collapse in performance. Although this phenomenon has generally been attributed to neural network learning under non-stationarity, it has been overlooked in on-policy policy optimization methods, which are often thought capable of training indefinitely. In this work, we empirically study representation dynamics in Proximal Policy Optimization (PPO) on the Atari and MuJoCo environments, revealing that PPO agents are also affected by feature rank deterioration and loss of plasticity. We show that this is aggravated by stronger non-stationarity, ultimately driving the actor's performance to collapse, regardless of the performance of the critic. We ask why the trust region, specific to methods like PPO, is not able to alleviate or prevent the collapse. We find that there is a connection between representation collapse and the degradation of the trust region, each exacerbating the other, and present Proximal Feature Optimization (PFO), a novel auxiliary loss showing, along with other interventions, that regularizing the representation dynamics improves the performance of PPO agents.
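The abstract does not define how representation rank is measured; a common proxy in the representation-collapse literature is the "effective rank" of a batch of penultimate-layer features, i.e. the number of singular values needed to capture a fixed fraction of their total mass. The sketch below is an illustrative implementation under that assumption (the function name `effective_rank` and the `delta=0.01` threshold are our choices, not taken from the paper):

```python
import numpy as np

def effective_rank(features: np.ndarray, delta: float = 0.01) -> int:
    """Smallest k such that the top-k singular values of the
    (batch_size, feature_dim) matrix `features` account for at
    least a (1 - delta) fraction of the total singular value mass."""
    s = np.linalg.svd(features, compute_uv=False)  # descending order
    cumulative = np.cumsum(s) / np.sum(s)
    return int(np.searchsorted(cumulative, 1.0 - delta) + 1)
```

Tracking this quantity on feature batches over the course of training would reveal the kind of rank deterioration the paper reports: a healthy network keeps it close to the feature dimension, while a collapsing one sees it shrink toward 1.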
