

Poster

A Method for Evaluating Hyperparameter Sensitivity in Reinforcement Learning

Jacob Adkins · Michael Bowling · Adam White

West Ballroom A-D #6500
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

The performance of modern reinforcement learning algorithms critically relies on tuning ever-increasing numbers of hyperparameters. Often, small changes in a hyperparameter can lead to drastic changes in performance, and different environments require very different hyperparameter settings to achieve the state-of-the-art performance reported in the literature. We currently lack a scalable and widely accepted approach to characterizing these complex interactions. This work proposes a new empirical methodology for studying, comparing, and quantifying the sensitivity of an algorithm's performance to hyperparameter tuning for a given set of environments. We then demonstrate the utility of this methodology by assessing the hyperparameter sensitivity of several commonly used normalization variants of PPO. The results suggest that several algorithmic performance improvements may, in fact, be a result of an increased reliance on hyperparameter tuning.
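The abstract does not spell out the metric itself, but the following minimal sketch illustrates one plausible way such a sensitivity quantity could be computed: compare performance under per-environment hyperparameter tuning against performance under the single best cross-environment setting. The function name, the aggregation scheme, and the normalization assumption below are all illustrative, not the authors' exact formulation.

import numpy as np

def hyperparameter_sensitivity(perf: np.ndarray) -> float:
    """Illustrative sensitivity sketch (NOT the paper's exact metric).

    perf[i, j]: score of hyperparameter setting i on environment j,
    assumed already normalized to a comparable scale across
    environments (an assumption, not prescribed by the abstract).
    """
    # Best achievable score per environment when tuning per environment.
    per_env_tuned = perf.max(axis=0)
    # Single setting that is best on average across all environments.
    cross_env_fixed = perf.mean(axis=1).argmax()
    cross_env_scores = perf[cross_env_fixed]
    # Sensitivity: average performance lost by forgoing per-environment tuning.
    return float(np.mean(per_env_tuned - cross_env_scores))

# Hypothetical usage: 3 hyperparameter settings evaluated on 4 environments.
perf = np.array([
    [0.90, 0.40, 0.70, 0.55],
    [0.60, 0.85, 0.50, 0.80],
    [0.70, 0.70, 0.65, 0.60],
])
print(f"sensitivity = {hyperparameter_sensitivity(perf):.3f}")  # 0.125 here

Under this reading, a sensitivity near zero means one fixed configuration transfers across environments, while a large value means reported performance depends heavily on per-environment tuning, which matches the abstract's concluding point.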
