

Workshop

ML with New Compute Paradigms

Jannes Gladrow · Babak Rahmani · Julie Grollier · Peter McMahon · Ruqi Zhang · Jack Kendall

West Meeting Room 114, 115

Sun 15 Dec, 8:15 a.m. PST

Digital computing is approaching fundamental limits and faces serious challenges in scalability, performance, and sustainability. At the same time, generative AI is fuelling an explosion in compute demand. There is thus a growing need to explore non-traditional computing paradigms, such as (opto-)analog and neuromorphic hardware and other physical computing systems.

Expanding on last year's successful NeurIPS workshop, the first of its kind in this community, we aim to bring together researchers from machine learning and alternative computation fields to establish new synergies between ML models and non-traditional hardware. Co-designing models with specialized hardware, an approach that has also been key to the synergy between digital chips such as GPUs and deep learning, has the potential to deliver a step change in the efficiency and sustainability of machine learning at scale. Beyond speeding up standard deep learning, new hardware may open the door to efficient inference and training of model classes that have so far been limited by compute resources, such as energy-based models and deep equilibrium models.

So far, however, these hardware technologies have fallen short because of inherent noise, device mismatch, a limited set of compute operations, and reduced bit-depth. As a community, we need to develop new models and algorithms that can embrace, and indeed exploit, these characteristics. This workshop aims to encourage cross-disciplinary collaboration that exploits the opportunities offered by emerging AI accelerators at both training and inference.


