Workshop
PORTAL: Scalable Tabular Foundation Models via Content-Specific Tokenization
Marco Spinaci · Marek Polewczyk · Johannes Hoffart · Markus Kohler · Sam Thelin · Tassilo Klein

Poster · Fri 11:00
Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection?
Qingsong Zhao · Yi Wang · Jilan Xu · Yinan He · Zifan Song · Limin Wang · Yu Qiao · Cairong Zhao

Poster
Long-tailed Object Detection Pretraining: Dynamic Rebalancing Contrastive Learning with Dual Reconstruction
Chen-Long Duan · Yong Li · Xiu-Shen Wei · Lin Zhao

Workshop · Sat 13:00
Panel: On Linear Representations and Pretraining Data Frequency in Language Models · When Attention Sink Emerges in Language Models: An Empirical View · Common Functional Decompositions Can Mis-attribute Differences in Outcomes Between Populations · U-shape

Poster · Thu 11:00
Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning
Wuyang Chen · Jialin Song · Pu Ren · Shashank Subramanian · Dmitriy Morozov · Michael Mahoney

Poster · Wed 11:00
No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance
Vishaal Udandarao · Ameya Prabhu · Adhiraj Ghosh · Yash Sharma · Philip Torr · Adel Bibi · Samuel Albanie · Matthias Bethge