Poster | Fri 16:30
Why Do We Need Weight Decay in Modern Deep Learning?
Francesco D'Angelo · Maksym Andriushchenko · Aditya Vardhan Varre · Nicolas Flammarion

Poster | Wed 11:00
AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation
Yuhan Zhu · Yuyang Ji · Zhiyu Zhao · Gangshan Wu · Limin Wang

Poster | Fri 11:00
Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness
Weilin Lin · Li Liu · Shaokui Wei · Jianze Li · Hui Xiong

Poster | Fri 16:30
Pretraining with Random Noise for Fast and Robust Learning without Weight Transport
Jeonghwan Cheon · Sang Wan Lee · Se-Bum Paik

Poster | Wed 11:00
Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks using the Marginal Likelihood
Rayen Dhahri · Alexander Immer · Bertrand Charpentier · Stephan Günnemann · Vincent Fortuin

Poster | Fri 16:30
BitsFusion: 1.99 bits Weight Quantization of Diffusion Model
Yang Sui · Yanyu Li · Anil Kag · Yerlan Idelbayev · Junli Cao · Ju Hu · Dhritiman Sagar · Bo Yuan · Sergey Tulyakov · Jian Ren

Workshop
Neural Networks with Complex-Valued Weights Have No Spurious Local Minima
Xingtu Liu

Workshop
Increasing Fairness via Combination with Learning Guarantees
Yijun Bian · Kun Zhang

Workshop
SGD and Weight Decay Secretly Minimize the Rank of Your Neural Network
Tomer Galanti · Zachary Siegel · Aparna Gupte · Tomaso Poggio

Workshop
Adapting Foundation Models via Training-free Dynamic Weight Interpolation
Changdae Oh · Sharon Li · Kyungwoo Song · Sangdoo Yun · Dongyoon Han

Workshop | Sun 12:00
Deep activity propagation via weight initialization in spiking neural networks
Aurora Micheli · Olaf Booij · Jan van Gemert · Nergis Tomen

Workshop
NegMerge: Consensual Weight Negation for Strong Machine Unlearning
Hyoseo Kim · Dongyoon Han · Junsuk Choe