

Poster in Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization

Accelerating Deep Learning using Ivy

Guillermo Sanchez-Brizuela · Ved Patwardhan · Matthew Barrett · Paul Anderson · Mustafa Hani · Daniel Lenton


Abstract:

Today's machine learning (ML) ecosystem suffers from deep fragmentation caused by the proliferation of incompatible frameworks, compiler infrastructures, and hardware. Each tool in this fragmented stack has its own benefits and drawbacks that make it better suited to certain use cases, so different areas of industry and academia adopt different tools. This hinders collaboration and democratization, and it ultimately leads to costly re-implementations and sub-optimal runtime efficiency at deployment, because each tool has only sparse, partial connections to the rest of the stack. In this paper, we present Ivy, a complementary, multi-backend ML framework, and its transpiler, which together aim to bridge this gap and address the fragmentation problem by enabling code from one framework to be integrated into another, speeding up research, development, and model inference.
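As a minimal sketch of the kind of cross-framework workflow the abstract describes, the snippet below converts a PyTorch function into a JAX-callable one. The `ivy.transpile` call and its `source`/`to` parameter names are assumptions made for illustration; the library's exact API may differ.

```python
import ivy
import torch
import jax.numpy as jnp

# A PyTorch function we would like to reuse from another framework.
def torch_normalize(x: torch.Tensor) -> torch.Tensor:
    return (x - x.mean()) / (x.std() + 1e-6)

# Hypothetical transpilation call (assumed signature): convert the
# PyTorch function into an equivalent function with a JAX backend.
jax_normalize = ivy.transpile(torch_normalize, source="torch", to="jax")

# The transpiled function can now be called directly on JAX arrays.
x = jnp.arange(10.0)
print(jax_normalize(x))
```

Under this assumed workflow, the same model or function could be developed once and then integrated into pipelines built on a different framework, which is the fragmentation problem the paper targets.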
