Poster
Coded Sequential Matrix Multiplication For Straggler Mitigation
Nikhil Krishnan Muralee Krishnan · Seyederfan Hosseini · Ashish Khisti
In this work, we consider a sequence of $J$ matrix multiplication jobs which need to be distributed by a master across multiple worker nodes. For $i\in \{1,2,\ldots,J\}$, job-$i$ begins in round-$i$ and has to be completed by round-$(i+T)$. Previous works consider only the special case of $T=0$ and focus on coding across workers. We propose here two schemes with $T>0$, which feature coding across workers as well as across the dimension of time. Our first scheme is a modification of the polynomial coding scheme introduced by Yu et al. and places no assumptions on the straggler model. Exploiting the temporal dimension allows the scheme to handle a larger set of straggler patterns than the polynomial coding scheme, for a given computational load per worker per round. The second scheme assumes a particular straggler model to further improve performance (in terms of encoding/decoding complexity). We develop theoretical results establishing (i) optimality of our proposed schemes for a certain class of straggler patterns and (ii) improved performance for the case of i.i.d. stragglers. These are further validated by experiments, where we implement our schemes to train neural networks.
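To make the baseline concrete, the following is a minimal sketch of the $T=0$ polynomial coding idea of Yu et al. that the first scheme builds on, for a single matrix-vector job: the master splits $A$ into $k$ row blocks, sends each of $n > k$ workers one polynomial-coded block, and recovers $Ab$ from any $k$ returned products by solving a Vandermonde system. All names and the choice of evaluation points are illustrative; this is not the authors' temporal ($T>0$) extension.

```python
import numpy as np

def encode(A_blocks, x):
    # Evaluate the matrix polynomial A(z) = sum_i A_i z^i at point x.
    return sum(Ai * (x ** i) for i, Ai in enumerate(A_blocks))

rng = np.random.default_rng(0)
k, n_workers = 3, 5                     # any k of the 5 workers suffice
A = rng.standard_normal((6, 4))
b = rng.standard_normal(4)
A_blocks = np.split(A, k)               # k row blocks of A

points = np.arange(1.0, n_workers + 1)  # distinct evaluation points, one per worker
tasks = [encode(A_blocks, x) for x in points]

# Suppose workers 1 and 3 straggle; results from any k workers are enough.
done = [0, 2, 4]
results = [tasks[j] @ b for j in done]  # each worker computes its coded product

# Decode: results[j] = sum_i (A_i b) x_j^i, so interpolate the
# degree-(k-1) polynomial via a Vandermonde solve.
V = np.vander(points[done], k, increasing=True)
coeffs = np.linalg.solve(V, np.stack(results))  # row i recovers A_i b
Ab = np.concatenate(coeffs)
assert np.allclose(Ab, A @ b)
```

The proposed schemes generalize this picture by also coding across rounds, so that work left unfinished by a straggler in one round can be compensated for within the deadline window of $T$ later rounds.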
Author Information
Nikhil Krishnan Muralee Krishnan (University of Toronto)
Seyederfan Hosseini (University of Toronto)
Ashish Khisti (University of Toronto)
More from the Same Authors
- 2021: Cross-Domain Lossy Compression as Optimal Transport with an Entropy Bottleneck
  Huan Liu · George Zhang · Jun Chen · Ashish Khisti
- 2021: Your Dataset is a Multiset and You Should Compress it Like One
  Daniel Severo · James Townsend · Ashish Khisti · Alireza Makhzani · Karen Ullrich
- 2021 Poster: Universal Rate-Distortion-Perception Representations for Lossy Compression
  George Zhang · Jingjing Qian · Jun Chen · Ashish Khisti
- 2021 Poster: Variational Model Inversion Attacks
  Kuan-Chieh Wang · YAN FU · Ke Li · Ashish Khisti · Richard Zemel · Alireza Makhzani
- 2019 Poster: Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates
  Jeffrey Negrea · Mahdi Haghifam · Gintare Karolina Dziugaite · Ashish Khisti · Daniel Roy