

Contributed Talk & Poster in Workshop: Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization

CoTFormer: More Tokens With Attention Make Up For Less Depth

Amirkeivan Mohtashami · Matteo Pagliardini · Martin Jaggi

[ Project Page ]
Sat 16 Dec 9:55 a.m. PST — 10 a.m. PST

Abstract:

The race to develop ever larger and deeper foundation models is underway. However, techniques like the Chain-of-Thought (CoT) method continue to play a pivotal role in achieving strong downstream performance. In this study, we establish an approximate parallel between using chain-of-thought and employing a deeper transformer. Building on this insight, we introduce CoTFormer, a transformer variant that employs an implicit CoT-like mechanism to achieve performance comparable to that of a deeper model. Our empirical findings demonstrate the effectiveness of CoTFormers, which significantly outperform larger standard transformers.
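The abstract does not spell out the architecture, but the title ("More Tokens With Attention Make Up For Less Depth") suggests re-applying a shallow block while letting it attend to extra tokens produced along the way. The following is a minimal, hypothetical sketch of that general idea, not the authors' method: the class name, the `n_repeats` parameter, and the token-appending scheme are illustrative assumptions, and positional encoding and masking are omitted for brevity.

```python
# Hypothetical sketch: a shallow transformer block is reused several times, and
# each pass's intermediate representations are appended as extra tokens that
# later passes can attend to, standing in for additional depth.
# This is an illustration of the general idea only, not the CoTFormer implementation.
import torch
import torch.nn as nn


class CoTFormerSketch(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_repeats=3):
        super().__init__()
        # One shallow block, weight-tied across repeats (assumption for the sketch).
        self.block = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.n_repeats = n_repeats

    def forward(self, x):
        # x: (batch, seq_len, d_model) token embeddings
        seq_len = x.size(1)
        tokens = x
        for _ in range(self.n_repeats):
            out = self.block(tokens)                      # one shallow pass over all tokens
            current = out[:, -seq_len:, :]                # latest representations of the original positions
            tokens = torch.cat([tokens, current], dim=1)  # append them as extra attendable tokens
        return tokens[:, -seq_len:, :]                    # final representations of the original tokens


# Usage example (shapes only; no trained weights involved).
model = CoTFormerSketch()
x = torch.randn(2, 10, 64)
y = model(x)  # -> shape (2, 10, 64)
```

The design choice illustrated here is that repeated passes add compute and attendable context rather than parameters, which is one plausible reading of how "more tokens with attention" could substitute for depth.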
