

All 2024 Events

45 Results

Page 1 of 4
Poster
Wed 16:30 Unveiling the Hidden Structure of Self-Attention via Kernel Principal Component Analysis
Rachel S.Y. Teo · Tan Nguyen
Poster
Fri 16:30 NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention
Tianyi Zhang · Jonah Yi · Bowen Yao · Zhaozhuo Xu · Anshumali Shrivastava
Affinity Event
Reducing Reasoning Costs - The Path of Optimization for Chain of Thought via Sparse Attention Mechanism
Libo Wang
Poster
Thu 11:00 Rough Transformers: Lightweight and Continuous Time Series Modelling through Signature Patching
Fernando Moreno-Pino · Alvaro Arroyo · Harrison Waldon · Xiaowen Dong · Alvaro Cartea
Workshop
Transformers are Efficient Compilers, Provably
Xiyu Zhai · Runlong Zhou · Liao Zhang · Simon Du
Poster
Fri 11:00 Perceiving Longer Sequences With Bi-Directional Cross-Attention Transformers
Markus Hiller · Krista A. Ehinger · Tom Drummond
Poster
Fri 11:00 Activating Self-Attention for Multi-Scene Absolute Pose Regression
Miso Lee · Jihwan Kim · Jae-Pil Heo
Poster
Thu 16:30 On the Role of Attention Masks and LayerNorm in Transformers
Xinyi Wu · Amir Ajorlou · Yifei Wang · Stefanie Jegelka · Ali Jadbabaie
Poster
Wed 16:30 Graph Convolutions Enrich the Self-Attention in Transformers!
Jeongwhan Choi · Hyowon Wi · Jayoung Kim · Yehjin Shin · Kookjin Lee · Nathaniel Trask · Noseong Park
Poster
Wed 11:00 Single Image Reflection Separation via Dual-Stream Interactive Transformers
Qiming Hu · Hainuo Wang · Xiaojie Guo
Poster
Wed 16:30 Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling
Mahdi Karami · Ali Ghodsi
Poster
Thu 16:30 Selective Attention: Enhancing Transformer through Principled Context Control
Xuechen Zhang · Xiangyu Chang · Mingchen Li · Amit Roy-Chowdhury · Jiasi Chen · Samet Oymak