Poster | Fri 16:30 | Aligner-Encoders: Self-Attention Transformers Can Be Self-Transducers | Adam Stooke · Rohit Prabhavalkar · Khe Sim · Pedro Moreno Mengibar
Workshop | | MOTIFNet: Automating the Analysis of Amphiphile and Block Polymer Self-Assembly from SAXS Data | Daoyuan Li · Shuquan Cui · Mahesh Mahanthappa · Frank Bates · Timothy Lodge · Joern Ilja Siepmann
Workshop | | Self-Attention Limits Working Memory Capacity of Transformer-Based Models | Dongyu Gong · Hantao Zhang
Poster | Wed 16:30 | Unveiling the Hidden Structure of Self-Attention via Kernel Principal Component Analysis | Rachel S.Y. Teo · Tan Nguyen
Poster | Thu 16:30 | Are Self-Attentions Effective for Time Series Forecasting? | Dongbin Kim · Jinseong Park · Jaewook Lee · Hoki Kim
Poster | Fri 11:00 | Activating Self-Attention for Multi-Scene Absolute Pose Regression | Miso Lee · Jihwan Kim · Jae-Pil Heo
Poster | Fri 11:00 | StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation | Yupeng Zhou · Daquan Zhou · Ming-Ming Cheng · Jiashi Feng · Qibin Hou
Poster | Wed 16:30 | Graph Convolutions Enrich the Self-Attention in Transformers! | Jeongwhan Choi · Hyowon Wi · Jayoung Kim · Yehjin Shin · Kookjin Lee · Nathaniel Trask · Noseong Park
Poster | Wed 11:00 | Bootstrapping Top-down Information for Self-modulating Slot Attention | Dongwon Kim · Seoyeon Kim · Suha Kwak
Poster | | Faster Neighborhood Attention: Reducing the O(n^2) Cost of Self Attention at the Threadblock Level | Ali Hassani · Wen-Mei Hwu · Humphrey Shi
Workshop | | Modern Hopfield Networks meet Encoded Neural Representations - Addressing Practical Considerations | Satyananda Kashyap · Niharika DSouza · Luyao Shi · Ken C. L. Wong · Hongzhi Wang · Tanveer Syeda-Mahmood