Poster
Optimal Gradient-based Algorithms for Non-concave Bandit Optimization
Baihe Huang · Kaixuan Huang · Sham Kakade · Jason Lee · Qi Lei · Runzhe Wang · Jiaqi Yang

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Bandit problems with linear or concave rewards have been extensively studied, but relatively few works have studied bandits with non-concave rewards. This work considers a large family of bandit problems in which the unknown underlying reward function is non-concave, including low-rank generalized linear bandit problems and the two-layer neural network bandit problem with polynomial activations.

For the low-rank generalized linear bandit problem, we provide an algorithm that is minimax-optimal in the dimension, refuting both conjectures in \cite{lu2021low,jun2019bilinear}. Our algorithms are based on a unified zeroth-order optimization paradigm that applies in great generality and attains optimal rates (in the dimension) in several structured polynomial settings. We further demonstrate the applicability of our algorithms to RL in the generative model setting, resulting in improved sample complexity over prior approaches.

Finally, we show that standard optimistic algorithms (e.g., UCB) are sub-optimal by dimension factors. In the neural network setting (with polynomial activation functions) with noiseless reward, we provide a bandit algorithm with sample complexity equal to the intrinsic algebraic dimension. Again, we show that optimistic approaches have worse sample complexity, polynomial in the extrinsic dimension (which could be exponentially worse in the polynomial degree).
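A zeroth-order optimization paradigm, as referenced in the abstract, uses only reward-value queries rather than gradient access. The sketch below illustrates the generic technique with a standard two-point gradient estimator; it is a minimal illustration of zeroth-order estimation, not the paper's specific algorithm, and all function names and parameter choices here are assumptions for the example.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta=1e-3, num_samples=100, seed=None):
    """Estimate the gradient of f at x from function evaluations only.

    Uses the classic two-point scheme: for a random Gaussian direction u,
    (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u is (approximately)
    an unbiased estimate of grad f(x); averaging over directions reduces
    the variance. Illustrative only -- not the algorithm from the paper.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    grad = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)  # random probe direction
        grad += (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    return grad / num_samples

# Example: f(x) = ||x||^2 has gradient 2x, so the estimate should be close.
x = np.array([1.0, -2.0, 0.5])
g = two_point_gradient_estimate(lambda z: z @ z, x, num_samples=5000, seed=0)
```

With enough samples, `g` concentrates around the true gradient `2 * x`; the bandit setting adds the complication that each query of `f` is a noisy reward observation, which the paper's structured settings exploit to obtain dimension-optimal rates.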

Author Information

Baihe Huang (Peking University)
Kaixuan Huang (Princeton University)
Sham Kakade (Harvard University & Microsoft Research)
Jason Lee (University of Southern California)
Qi Lei (Princeton University)
Runzhe Wang (IIIS, Tsinghua University)
Jiaqi Yang (Tsinghua University)