18 Results
Type | Time | Title | Authors
Workshop | | SE(3)-equivariant self-attention via invariant features | Nan Chen · Soledad Villar
Poster | Wed 14:00 | Transformers from an Optimization Perspective | Yongyi Yang · Zengfeng Huang · David P Wipf
Poster | Thu 14:00 | Exponential Separations in Symmetric Neural Networks | Aaron Zweig · Joan Bruna
Poster | Wed 9:00 | Orthogonal Transformer: An Efficient Vision Transformer Backbone with Token Orthogonalization | Huaibo Huang · Xiaoqiang Zhou · Ran He
Poster | | Rethinking Alignment in Video Super-Resolution Transformers | Shuwei Shi · Jinjin Gu · Liangbin Xie · Xintao Wang · Yujiu Yang · Chao Dong
Workshop | | Faster Attention Is What You Need: A Fast Self-Attention Neural Network Backbone Architecture for the Edge via Double-Condensing Attention Condensers | Alexander Wong · Mohammad Javad Shafiee · Saad Abbasi · Saeejith Nair · Mahmoud Famouri
Poster | | Geodesic Self-Attention for 3D Point Clouds | Zhengyu Li · Xuan Tang · Zihao Xu · Xihao Wang · Hui Yu · Mingsong Chen · Xian Wei
Workshop | Fri 0:10 | Bi-Directional Self-Attention for Vision Transformers | George Stoica · Taylor Hearn · Bhavika Devnani · Judy Hoffman
Poster | Thu 9:00 | AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning | Tao Yang · Jinghao Deng · Xiaojun Quan · Qifan Wang · Shaoliang Nie
Poster | Thu 9:00 | Recurrent Memory Transformer | Aydar Bulatov · Yury Kuratov · Mikhail Burtsev
Poster | Wed 9:00 | Focal Modulation Networks | Jianwei Yang · Chunyuan Li · Xiyang Dai · Jianfeng Gao
Poster | Tue 9:00 | So3krates: Equivariant attention for interactions on arbitrary length-scales in molecular systems | Thorben Frank · Oliver Unke · Klaus-Robert Müller