

All 2024 Events

1252 Results

Workshop
Fact or Fiction? Can LLMs be Reliable Annotators for Political Truths?
Veronica Chatrath · Marcelo Lotif · Shaina Raza
Workshop
Sun 14:15 Are Large-Language Models Graph Algorithmic Reasoners?
Alexander Taylor · Anthony Cuturrufo · Vishal Yathish · Mingyu Derek Ma · Wei Wang
Workshop
Sun 12:00 Are Large-Language Models Graph Algorithmic Reasoners?
Alexander Taylor · Anthony Cuturrufo · Vishal Yathish · Mingyu Derek Ma · Wei Wang
Workshop
Language Models Resist Alignment
Jiaming Ji · Kaile Wang · Tianyi (Alex) Qiu · Boyuan Chen · Changye Li · Hantao Lou · Jiayi Zhou · Juntao Dai · Yaodong Yang
Workshop
Sat 15:45 LLMs for Causal Inference
Jonathan Choi
Workshop
Do LLMs "internally know" when they follow instructions?
Juyeon Heo · Christina Heinze-Deml · Shirley Ren · Oussama Elachqar · Udhyakumar Nallasamy · Andy Miller · Jaya Narain
Workshop
Sun 10:45 Contributed Talk 1: iART - Imitation guided Automated Red Teaming
Sajad Mousavi · Desik Rengarajan · Ashwin Ramesh Babu · Vineet Gundecha · Avisek Naug · Sahand Ghorbanpour · Ricardo Luna Gutierrez · Antonio Guillen-Perez · Paolo Faraboschi · Soumyendu Sarkar
Workshop
iART - Imitation guided Automated Red Teaming
Sajad Mousavi · Desik Rengarajan · Ashwin Ramesh Babu · Vineet Gundecha · Avisek Naug · Sahand Ghorbanpour · Ricardo Luna Gutierrez · Antonio Guillen-Perez · Paolo Faraboschi · Soumyendu Sarkar
Workshop
Imitation Guided Automated Red Teaming
Sajad Mousavi · Desik Rengarajan · Ashwin Ramesh Babu · Vineet Gundecha · Antonio Guillen-Perez · Ricardo Luna Gutierrez · Avisek Naug · Sahand Ghorbanpour · Soumyendu Sarkar
Workshop
Sat 15:45 Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs
Ruijia Niu · Dongxia Wu · Rose Yu · Yian Ma
Workshop
Advancing NLP Security by Leveraging LLMs as Adversarial Engines
Sudarshan Srinivasan · Maria Mahbub · Amir Sadovnik
Workshop
Formal Theorem Proving by Rewarding LLMs to Decompose Proofs Hierarchically
Kefan Dong · Arvind Mahankali · Tengyu Ma