Search All 2022 Events: 57 Results (Page 1 of 5)

Poster (Wed 14:00): FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
Tri Dao · Dan Fu · Stefano Ermon · Atri Rudra · Christopher Ré

Workshop: Striving for data-model efficiency: Identifying data externalities on group performance
Esther Rolf · Ben Packer · Alex Beutel · Fernando Diaz

Workshop: Loop Unrolled Shallow Equilibrium Regularizer (LUSER) - A Memory-Efficient Inverse Problem Solver
Peimeng Guan · Jihui Jin · Justin Romberg · Mark Davenport

Poster (Tue 9:00): Active Surrogate Estimators: An Active Learning Approach to Label-Efficient Model Evaluation
Jannik Kossen · Sebastian Farquhar · Yarin Gal · Thomas Rainforth

Workshop (Fri 12:15): Towards Parameter-Efficient Automation of Data Wrangling Tasks with Prefix-Tuning
David Vos · Till Döhmen · Sebastian Schelter

Poster (Wed 9:00): Conservative Dual Policy Optimization for Efficient Model-Based Reinforcement Learning
Shenao Zhang

Poster: Efficient Graph Similarity Computation with Alignment Regularization
Wei Zhuo · Guang Tan

Workshop (Fri 11:30): Transformers are Sample-Efficient World Models
Vincent Micheli · Eloi Alonso · François Fleuret

Poster (Thu 9:00): Diagonal State Spaces are as Effective as Structured State Spaces
Ankit Gupta · Albert Gu · Jonathan Berant

Poster (Thu 9:00): Is this the Right Neighborhood? Accurate and Query Efficient Model Agnostic Explanations
Amit Dhurandhar · Karthikeyan Natesan Ramamurthy · Karthikeyan Shanmugam

Poster (Tue 9:00): Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning
Dilip Arumugam · Benjamin Van Roy