Oral Session 1: Theory
Moderator: Nicolò Cesa-Bianchi
Separation Results between Fixed-Kernel and Feature-Learning Probability Metrics
Carles Domingo i Enrich · Youssef Mroueh
Several works in implicit and explicit generative modeling have empirically observed that feature-learning discriminators outperform fixed-kernel discriminators in terms of the sample quality of the models. We provide separation results between probability metrics with fixed-kernel and feature-learning discriminators using the function classes $\mathcal{F}_2$ and $\mathcal{F}_1$ respectively, which were developed to study overparametrized two-layer neural networks.
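For intuition, here is a minimal sketch (not the paper's construction) of the two discriminator families: a fixed Gaussian-kernel MMD, where no features are trained, versus an IPM whose one-hidden-layer ReLU discriminator is fit to the data by projected gradient ascent, loosely in the spirit of the fixed-feature versus learned-feature distinction. All function names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd2_gaussian(X, Y, sigma=1.0):
    """Squared MMD with a fixed Gaussian kernel: no features are learned."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

def learned_ipm(X, Y, width=64, steps=300, lr=0.5):
    """IPM estimate over discriminators f(x) = a . relu(W x), trained by
    projected gradient ascent on the mean discrepancy (illustrative only)."""
    d = X.shape[1]
    W = rng.normal(size=(width, d)) / np.sqrt(d)   # hidden features (learned)
    a = rng.normal(size=width) / np.sqrt(width)    # output weights (learned)
    for _ in range(steps):
        mx, my = (X @ W.T > 0), (Y @ W.T > 0)      # ReLU active sets
        gap = np.maximum(X @ W.T, 0).mean(0) - np.maximum(Y @ W.T, 0).mean(0)
        grad_W = a[:, None] * ((mx[:, :, None] * X[:, None, :]).mean(0)
                               - (my[:, :, None] * Y[:, None, :]).mean(0))
        a += lr * gap
        W += lr * grad_W
        a /= max(1.0, np.linalg.norm(a))           # keep the class norm-bounded
        W /= np.maximum(1.0, np.linalg.norm(W, axis=1, keepdims=True))
    return a @ (np.maximum(X @ W.T, 0).mean(0) - np.maximum(Y @ W.T, 0).mean(0))

X = rng.normal(size=(500, 2))
Y = rng.normal(size=(500, 2)) + np.array([1.0, 0.0])   # mean-shifted Gaussian
print("fixed-kernel MMD^2:   ", mmd2_gaussian(X, Y))
print("feature-learning IPM: ", learned_ipm(X, Y))
```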
Near-Optimal No-Regret Learning in General Games
Constantinos Daskalakis · Maxwell Fishelson · Noah Golowich
We show that Optimistic Hedge -- a common variant of multiplicative-weights updates with recency bias -- attains $\mathrm{poly}(\log T)$ regret in multi-player general-sum games.
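For reference, a minimal sketch of the Optimistic Hedge update for a single player, assuming bounded per-action loss vectors are observed in full. The recency bias is the extra copy of the most recent loss vector in the exponent, used as an optimistic prediction of the next loss; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def optimistic_hedge(losses, eta=0.1):
    """Optimistic Hedge: multiplicative weights in which the most recent loss
    is counted twice (recency bias). `losses` is a (T, n) array of per-action
    losses; returns the (T, n) sequence of mixed strategies played."""
    T, n = losses.shape
    cum = np.zeros(n)                      # cumulative loss so far
    prev = np.zeros(n)                     # last loss, reused as the optimistic guess
    plays = np.empty((T, n))
    for t in range(T):
        logits = -eta * (cum + prev)       # optimism: add the previous loss again
        w = np.exp(logits - logits.max())  # shift logits for numerical stability
        plays[t] = w / w.sum()
        cum += losses[t]
        prev = losses[t]
    return plays

# Usage: 1000 rounds over 5 actions with random losses in [0, 1]
rng = np.random.default_rng(1)
strategies = optimistic_hedge(rng.uniform(size=(1000, 5)))
```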
Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions
Yin Tat Lee · Ruoqi Shen · Kevin Tian
We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of $\widetilde{\Omega}(\kappa d)$ on the mixing time of MALA from an exponentially warm start, where $\kappa$ is the condition number of the target and $d$ is the dimension.
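For context, a minimal sketch of one MALA step (Langevin proposal plus Metropolis-Hastings correction) on a well-conditioned Gaussian target; this illustrates the sampler being analyzed, not the paper's lower-bound construction. The function names, step size, and target are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mala_step(x, log_pi, grad_log_pi, h):
    """One MALA step: Langevin proposal, then Metropolis-Hastings accept/reject."""
    y = x + h * grad_log_pi(x) + np.sqrt(2.0 * h) * rng.normal(size=x.shape)
    # log proposal density q(b | a), up to the shared Gaussian normalizer
    def log_q(b, a):
        return -np.sum((b - a - h * grad_log_pi(a)) ** 2) / (4.0 * h)
    log_alpha = log_pi(y) + log_q(x, y) - log_pi(x) - log_q(y, x)
    return y if np.log(rng.uniform()) < log_alpha else x

# Well-conditioned Gaussian target N(0, diag(1, 1/kappa)), condition number kappa
kappa = 10.0
prec = np.array([1.0, kappa])                 # diagonal precision matrix
log_pi = lambda x: -0.5 * np.sum(prec * x ** 2)
grad_log_pi = lambda x: -prec * x

x = np.ones(2)
for _ in range(1000):
    x = mala_step(x, log_pi, grad_log_pi, h=0.05)
print("final state:", x)
```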