Search All 2024 Events
 

37 Results

Affinity Event
Enhancing Communication Efficiency and Robustness in Split-Federated Learning with Rate-Distortion inspired Compression
Chamani Shiranthika Jayakody Kankanamalage · Hadi Hadizadeh · Ivan Bajić · Parvaneh Saeedi
Workshop
Sun 11:14 On the Convergence Rates of Federated Q-Learning across Heterogeneous Environments
Muxing Wang · Pengkun Yang · Lili Su
Poster
Thu 16:30 Analyzing & Reducing the Need for Learning Rate Warmup in GPT Training
Atli Kosson · Bettina Messmer · Martin Jaggi
Poster
Thu 11:00 Universal Rates for Active Learning
Steve Hanneke · Amin Karbasi · Shay Moran · Grigoris Velegkas
Poster
Wed 16:30 Why Warmup the Learning Rate? Underlying Mechanisms and Improvements
Dayal Singh Kalra · Maissam Barkeshli
Poster
Fri 16:30 Fast Rates in Stochastic Online Convex Optimization by Exploiting the Curvature of Feasible Sets
Taira Tsuchiya · Shinji Ito
Workshop
On Your Mark, Get Set, Warmup!
Dayal Singh Kalra · Maissam Barkeshli
Poster
Fri 11:00 An In-depth Investigation of Sparse Rate Reduction in Transformer-like Models
Yunzhe Hu · Difan Zou · Dong Xu
Poster
Fri 11:00 Transformers Learn to Achieve Second-Order Convergence Rates for In-Context Linear Regression
Deqing Fu · Tian-qi Chen · Robin Jia · Vatsal Sharan
Poster
Thu 11:00 Stepping on the Edge: Curvature Aware Learning Rate Tuners
Vincent Roulet · Atish Agarwala · Jean-Bastien Grill · Grzegorz Swirszcz · Mathieu Blondel · Fabian Pedregosa
Poster
Fri 16:30 Universal Rates of Empirical Risk Minimization
Steve Hanneke · Mingyue Xu