Workshop: Machine Learning for Systems

Anna Goldie, Azalia Mirhoseini, Jonathan Raiman, Martin Maas, Xinlei Xu

Saturday, December 12, 2020, 9:00 AM–5:50 PM (PST)
Abstract: **NeurIPS 2020 Workshop on Machine Learning for Systems**

Website: http://mlforsystems.org/

Submission Link: https://cmt3.research.microsoft.com/MLFS2020/Submission/Index

Important Dates:

Submission Deadline: **October 9th, 2020** (AoE)
Acceptance Notifications: October 23rd, 2020
Camera-Ready Submission: November 29th, 2020
Workshop: December 12th, 2020

Call for Papers:

Machine Learning for Systems is an interdisciplinary workshop that brings together researchers in computer systems and machine learning. It serves as a platform to promote discussion between these two communities.

We invite submissions of extended abstracts of up to 4 pages in the broad area of using machine learning in the design of computer systems. We are especially interested in submissions that move beyond using machine learning to replace numerical heuristics. This year, we hope to see novel system designs, streamlined cross-platform optimization, and new benchmarks for ML for Systems.

Accepted papers will be made available on the workshop website, but there will be no formal proceedings. Authors may therefore publish their work in other journals or conferences. The workshop will include invited talks from industry and academia as well as oral and poster presentations by workshop participants.

Areas of interest:

* Supervised, unsupervised, and reinforcement learning research with applications to:
  - Systems software
  - Runtime systems
  - Distributed systems
  - Security
  - Compilers, data structures, and code optimization
  - Databases
  - Computer architecture, microarchitecture, and accelerators
  - Circuit design and layout
  - Interconnects and networking
  - Storage
  - Datacenters
* Representation learning for hardware and software
* Optimization of computer systems and software
* Systems modeling and simulation
* Implementations of ML for Systems and challenges
* High-quality datasets for ML for Systems problems

Submission Instructions:

We welcome submissions of up to 4 pages (not including references). The page limit is not strict, but authors are encouraged to adhere to it where possible. All submissions must be in PDF format and should follow the NeurIPS 2020 format. Submissions do not have to be anonymized.

Please submit your paper via CMT (see the Submission Link above) no later than October 9th, 2020, midnight anywhere on Earth (AoE).


Schedule

All times are PST (UTC-8).

* 09:00–09:15 Opening Remarks
* 09:15–09:50 Invited Speaker: Christina Delimitrou
* 09:50–10:25 Invited Speaker: Ed Chi
* 10:25–10:40 Break
* 10:40–10:50 Program Graphs for Machine Learning (Chris Cummins)
* 10:50–11:00 DEff-ARTS: Differentiable Efficient Architecture Search (Sulaiman Sadiq)
* 11:00–11:10 Learning Local Advantage Functions for Generalizable Graph Optimizations (Yifan Wu)
* 11:10–11:20 A Deep Learning Based Cost Model for Automatic Code Optimization (Riyadh Baghdadi)
* 11:20–11:30 Q&A (Talks #1–4)
* 11:30–12:05 Invited Speaker: Bryan Catanzaro
* 12:05–13:00 Break
* 13:00–13:10 NVCell: Generate Standard Cell Layout in Advanced Technology Nodes with Reinforcement Learning (Mark Ren)
* 13:10–13:20 A General Framework For VLSI Tool Parameter Optimization with Deep Reinforcement Learning (Anthony Agnesina)
* 13:20–13:30 Learned Hardware/Software Co-Design of Neural Accelerators (Zhan Shi)
* 13:30–13:40 Apollo: Transferable Architecture Exploration (Amir Yazdanbakhsh)
* 13:40–13:50 Q&A (Talks #5–8)
* 13:50–14:25 Invited Speaker: Benoit Steiner, "Iterative Value Learning for Throughput Optimization of Deep Learning Workloads"
* 14:25–14:35 Learned Indexes for a Google-scale Disk-based Database (Deniz Altınbüken)
* 14:35–14:45 Matrix Profile Index Prediction for Streaming Time Series (Maryam Shahcheraghi)
* 14:45–14:55 Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning (Somdeb Majumdar)
* 14:55–15:05 MicroPlace: Placing Micro Virtual Machines with Hindsight Imitation (Bharathan Balaji)
* 15:05–15:15 Q&A (Talks #9–12)
* 15:15–15:30 Break
* 15:30–16:05 Invited Speaker: Justin Gottschlich
* 16:05–16:15 Resonance: Replacing Software Constants with Context-Aware Models in Real-time Communication (Jayant Gupchup)
* 16:15–16:25 CADET: A Systematic Method For Debugging Misconfigurations using Counterfactual Reasoning (Shahriar Iqbal)
* 16:25–16:35 The Law of Attraction: Affinity-Aware Placement Optimization using Graph Neural Networks (Yi-Chen Lu)
* 16:35–16:45 Highly Available Data Parallel ML Training on Mesh Networks (Sameer Kumar)
* 16:45–16:55 ControlFlag: A Self-supervised Idiosyncratic Pattern Detection System for Software Control Structures (Niranjan Hasabnis)
* 16:55–17:05 Q&A (Talks #13–17)
* 17:05–17:40 Invited Speaker: Kunle Olukotun
* 17:40–17:50 Closing Remarks