

Poster session in Workshop: OPT 2021: Optimization for Machine Learning

Poster Session 2 (gather.town)

Wenjie Li · Akhilesh Soni · Jinwuk Seok · Jianhao Ma · Jeffery Kline · Mathieu Tuli · Miaolan Xie · Robert Gower · Quanqi Hu · Matteo Cacciola · Yuanlu Bai · Boyue Li · Wenhao Zhan · Shentong Mo · Junhyung Lyle Kim · Sajad Fathi Hafshejani · Chris Junchi Li · Zhishuai Guo · Harshvardhan Harshvardhan · Neha Wadia · Tatjana Chavdarova · Difan Zou · Zixiang Chen · Aman Gupta · Jacques Chen · Betty Shea · Benoit Dherin · Aleksandr Beznosikov


Abstract:

Please join us in gather.town (see link above). The abstracts of the posters presented in this session appear below the schedule.

Papers and presenting authors for this poster session in gather.town:

  • Optimum-statistical Collaboration Towards Efficient Black-box Optimization, Wenjie Li

  • Integer Programming Approaches To Subspace Clustering With Missing Data, Akhilesh Soni

  • Stochastic Learning Equation using Monotone Increasing Resolution of Quantization, Jinwuk Seok

  • Sign-RIP: A Robust Restricted Isometry Property for Low-rank Matrix Recovery, Jianhao Ma

  • Farkas' Theorem of the Alternative for Prior Knowledge in Deep Networks, Jeffery Kline

  • Towards Robust and Automatic Hyper-Parameter Tuning, Mahdi S. Hosseini

  • High Probability Step Size Lower Bound for Adaptive Stochastic Optimization, Miaolan Xie

  • Stochastic Polyak Stepsize with a Moving Target, Robert M Gower

  • A Stochastic Momentum Method for Min-max Bilevel Optimization, Quanqi Hu

  • Deep Neural Networks pruning via the Structured Perspective Regularization, Matteo Cacciola

  • Efficient Calibration of Multi-Agent Market Simulators from Time Series with Bayesian Optimization, Yuanlu Bai

  • DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization, Boyue Li

  • Policy Mirror Descent for Regularized RL: A Generalized Framework with Linear Convergence, Shicong Cen

  • Simulated Annealing for Neural Architecture Search, Shentong Mo

  • Acceleration and Stability of Stochastic Proximal Point Algorithm, Junhyung Lyle Kim

  • Barzilai and Borwein conjugate gradient method equipped with a non-monotone line search technique, Sajad Fathi Hafshejani

  • On the convergence of stochastic extragradient for bilinear games using restarted iteration averaging, Chris Junchi Li

  • Practice-Consistent Analysis of Adam-Style Methods, Zhishuai Guo

  • Escaping Local Minima With Stochastic Noise, Harsh Vardhan

  • Optimization with Adaptive Step Size Selection from a Dynamical Systems Perspective, Neha S Wadia

  • Last-Iterate Convergence of Saddle Point Optimizers via High-Resolution Differential Equations, Tatjana Chavdarova

  • Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization, Difan Zou

  • Faster Perturbed Stochastic Gradient Methods for Finding Local Minima, Zixiang Chen

  • Adam vs. SGD: Closing the generalization gap on image classification, Aman Gupta

  • Heavy-tailed noise does not explain the gap between SGD and Adam on Transformers, Frederik Kunstner

  • Faster Quasi-Newton Methods for Linear Composition Problems, Betty Shea

  • The Geometric Occam's Razor Implicit in Deep Learning, Benoit Dherin

  • Random-reshuffled SARAH does not need full gradient computations, Aleksandr Beznosikov