

Search All 2023 Events
69 Results

Workshop
How Long Can Context Length of Open-Source LLMs truly Promise?
Dacheng Li · Rulin Shao · Anze Xie · Ying Sheng · Lianmin Zheng · Joseph Gonzalez · Ion Stoica · Xuezhe Ma · Hao Zhang
Poster
Thu 8:45 Revisiting Visual Model Robustness: A Frequency Long-Tailed Distribution View
Zhiyu Lin · Yifei Gao · Yunfan Yang · Jitao Sang
Workshop
Welfare Diplomacy: Benchmarking Language Model Cooperation
Gabe Mukobi · Hannah Erlebach · Niklas Lauffer · Lewis Hammond · Alan Chan · Jesse Clifton
Poster
Thu 8:45 SutraNets: Sub-series Autoregressive Networks for Long-Sequence, Probabilistic Forecasting
Shane Bergsma · Tim Zeyl · Lei Guo
Poster
Thu 15:00 Digital Typhoon: Long-term Satellite Image Dataset for the Spatio-Temporal Modeling of Tropical Cyclones
Asanobu Kitamoto · Jared Hwang · Bastien Vuillod · Lucas Gautier · Yingtao Tian · Tarin Clanuwat
Poster
Tue 15:15 Convolutional State Space Models for Long-Range Spatiotemporal Modeling
Jimmy Smith · Shalini De Mello · Jan Kautz · Scott Linderman · Wonmin Byeon
Workshop
Ring Attention with Blockwise Transformers for Near-Infinite Context
Hao Liu · Matei A Zaharia · Pieter Abbeel
Poster
Tue 15:15 ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection
Zhongzhan Huang · Pan Zhou · Shuicheng Yan · Liang Lin
Poster
Wed 15:00 UP-DP: Unsupervised Prompt Learning for Data Pre-Selection with Vision-Language Models
Xin Li · Sima Behpour · Thang Long Doan · Wenbin He · Liang Gou · Liu Ren
Poster
Tue 8:45 Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models
Lin Li · Jun Xiao · Guikun Chen · Jian Shao · Yueting Zhuang · Long Chen
Workshop
Trained Transformers Learn Linear Models In-Context
Ruiqi Zhang · Spencer Frei · Peter Bartlett