

Search All 2024 Events
 

248 Results

Page 2 of 21
Workshop
Measuring Pre-training Data Quality without Labels for Time Series Foundation Models
Songkang Wen · Vasilii Feofanov · Jianfeng Zhang
Poster
Fri 11:00 SparseLLM: Towards Global Pruning of Pre-trained Language Models
Guangji Bai · Yijiang Li · Chen LING · Kibaek Kim · Liang Zhao
Workshop
Pre-Training Multimodal Hallucination Detectors with Corrupted Grounding Data
Spencer Whitehead · Jacob Phillips · Sean Hendryx
Workshop
Sat 10:55 Expertise-Centric Prompting Framework for Financial Tabular Data Generation using Pre-trained Large Language Models
Subin Kim · Jungmin Son · Minyoung Jung · Youngjun Kwak
Workshop
Sat 16:15 BLAP: Bootstrapping Language-Audio Pre-training for Music Captioning
Workshop
Fostering Intrinsic Motivation in Reinforcement Learning with Pre-trained Foundation Models
Alain Andres · Javier Del Ser
Workshop
The Future of Large Language Model Pre-training is Federated
Lorenzo Sani · Alexandru-Andrei Iacob · Zeyu Cao · Bill Marino · Yan Gao · Tomas Paulik · Wanru Zhao · William Shen · Preslav Aleksandrov · Xinchi Qiu · Nicholas Lane
Workshop
Sun 10:21 The Future of Large Language Model Pre-training is Federated
Lorenzo Sani · Alex Iacob · Zeyu Cao · Bill Marino · Yan Gao · Tomas Paulik · Wanru Zhao · William F. Shen · Preslav Aleksandrov · Xinchi Qiu · Nicholas Donald Lane
Poster
Thu 16:30 Extracting Training Data from Molecular Pre-trained Models
Renhong Huang · Jiarong Xu · Zhiming Yang · Xiang Si · Xin Jiang · Hanyang Yuan · Chunping Wang · YANG YANG
Poster
Wed 16:30 How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider and MoE Transformers
Xin Lu · Yanyan Zhao · Bing Qin · Liangyu Huo · Qing Yang · Dongliang Xu
Poster
Fri 16:30 CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training
David Brandfonbrener · Hanlin Zhang · Andreas Kirsch · Jonathan Richard Schwarz · Sham Kakade
Workshop
Sat 11:00 Optimizing Data Use for Efficient Pre-training
Danqi Chen