Workshop

Bayesian optimization, experimental design and bandits: Theory and applications

Nando de Freitas · Roman Garnett · Frank R Hutter · Michael A Osborne

Melia Sierra Nevada: Hotel Bar

Thu 15 Dec, 10:30 p.m. PST

Recently, we have witnessed many important advances in learning approaches for sequential decision making. These advances have occurred in different communities, which refer to the problem using different terminology: Bayesian optimization, experimental design, bandits ($\mathcal{X}$-armed bandits, contextual bandits, Gaussian process bandits), active sensing, personalized recommender systems, automatic algorithm configuration, reinforcement learning, and so on. These communities also tend to use different methodologies: some focus on practical performance, while others are more concerned with the theoretical aspects of the problem. As a result, they have derived and engineered a diverse range of methods for trading off exploration and exploitation in learning. For these reasons, it is timely and important to bring these communities together: to identify differences and commonalities, to propose common benchmarks, to review the many practical applications (interactive user interfaces, automatic tuning of parameters and architectures, robotics, recommender systems, active vision, and more), to narrow the gap between theory and practice, and to identify strategies for attacking high dimensionality.
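To make the exploration-exploitation trade-off concrete, below is a minimal sketch of one method shared across these communities: a GP-UCB-style acquisition rule (in the spirit of Gaussian process bandits) implemented with only NumPy. The objective `f`, the kernel lengthscale, the candidate grid, and the confidence schedule `beta` are all illustrative assumptions, not part of the workshop description.

```python
# Minimal GP-UCB-style sketch: at each step, evaluate the point that
# maximizes posterior mean (exploitation) plus a multiple of posterior
# standard deviation (exploration). Illustrative only.
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    """Squared-exponential kernel between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and std of a zero-mean GP with unit prior variance."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train
    var = 1.0 - np.sum(K_s * (K_inv @ K_s), axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

def f(x):  # hypothetical black-box objective to maximize
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)       # candidate points ("arms")
x_obs = rng.uniform(0.0, 1.0, size=2)   # two random initial evaluations
y_obs = f(x_obs)

for t in range(1, 21):
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    beta = 2.0 * np.log(len(grid) * t ** 2)  # assumed confidence schedule
    ucb = mu + np.sqrt(beta) * sigma         # exploit (mu) + explore (sigma)
    x_next = grid[np.argmax(ucb)]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))

print(f"best observed value: {y_obs.max():.3f} at x = {x_obs[y_obs.argmax()]:.3f}")
```

The same mean-plus-uncertainty structure appears under different names across the communities above, which is part of what motivates bringing them together.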
