Workshops
Courtney Paquette · Sebastian Stich · Quanquan Gu · Cristóbal Guzmán · John Duchi

[ Physical ]

OPT 2022 will bring together experts in optimization to share their perspectives, while also drawing on crossover experts in ML to share their views and recent advances. OPT 2022 honors this tradition of bringing together people from optimization and from ML in order to promote and generate new interactions between the two communities.

To foster the spirit of innovation and collaboration that is a goal of this workshop, OPT 2022 will focus the contributed talks on research in Reliable Optimization Methods for ML. Many optimization algorithms for ML were originally developed with the goal of handling computational constraints (e.g., stochastic gradient-based algorithms). Moreover, the analyses of these algorithms followed the classical optimization approach, where one measures the performance of an algorithm based on (i) its computational cost and (ii) its convergence for any input. As engineering capabilities increase and ML is widely adopted in many real-world applications, practitioners are seeking optimization algorithms that go beyond finding the minimizer as quickly as possible. They want reliable methods that handle the complications arising in the real world. For example, bad actors are increasingly attempting to fool models with deceptive data. This leads to questions such as what algorithms are more robust to adversarial …

Noga Zaslavsky · Mycal Tucker · Sarah Marzen · Irina Higgins · Stephanie Palmer · Samuel J Gershman

[ Physical ]

Many cognitive and neural systems can be described in terms of compression and transmission of information given bounded resources. While information theory, as a principled mathematical framework for characterizing such systems, has been widely applied in neuroscience and machine learning, its role in understanding cognition has traditionally been contested. This traditional view has been changing in recent years, with growing evidence that information-theoretic optimality principles underlie a wide range of cognitive functions, including perception, working memory, language, and decision making. In parallel, there has also been a surge of contemporary information-theoretic approaches in machine learning, enabling large-scale neural-network implementation of information-theoretic models.

These scientific and technological developments open up new avenues for progress toward an integrative computational theory of human and artificial cognition, by leveraging information-theoretic principles as bridges between various cognitive functions and neural representations. This workshop aims to explore these new research directions and bring together researchers from machine learning, cognitive science, neuroscience, linguistics, economics, and potentially other fields, who are interested in integrating information-theoretic approaches that have thus far been studied largely independently of each other. In particular, we aim to discuss questions and exchange ideas along the following directions:

- Understanding human cognition: To what extent …

Yuxi Li · Emma Brunskill · MINMIN CHEN · Omer Gottesman · Lihong Li · Yao Liu · Zhiwei Tony Qin · Matthew Taylor

[ Physical ]

Discover how to improve the adoption of RL in practice by discussing key research problems, the state of the art, and success stories, insights, and lessons learned regarding practical RL algorithms, practical issues, and applications with leading experts from both academia and industry at the NeurIPS 2022 RL4RealLife workshop.

Aviral Kumar · Rishabh Agarwal · Aravind Rajeswaran · Wenxuan Zhou · George Tucker · Doina Precup

[ Physical ]

While offline RL focuses on learning solely from fixed datasets, one of the main lessons from the previous edition of the offline RL workshop was that large-scale RL applications typically want to use offline RL as part of a bigger system rather than as the end goal in itself. Thus, we propose to shift the focus from algorithm design and offline RL applications to how offline RL can be a launchpad, i.e., a tool or a starting point, for solving challenges in sequential decision-making such as exploration, generalization, transfer, safety, and adaptation. Particularly, we are interested in studying and discussing methods for learning expressive models, policies, skills and value functions from data that can help us make progress towards efficiently tackling these challenges, which are otherwise often intractable.


Submission site: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/Offline_RL. The submission deadline is September 25, 2022 (Anywhere on Earth). Please refer to the submission page for more details.

Mihaela van der Schaar · Zhaozhi Qian · Sergul Aydore · Dimitris Vlitas · Dino Oglic · Tucker Balch

[ Physical ]

Advances in machine learning owe much to the public availability of high-quality benchmark datasets and the well-defined problem settings that they encapsulate. Examples are abundant: CIFAR-10 for image classification, COCO for object detection, SQuAD for question answering, BookCorpus for language modelling, etc. There is a general belief that the accessibility of high-quality benchmark datasets is central to the thriving of our community.

However, three prominent issues affect benchmark datasets: data scarcity, privacy, and bias. They already manifest in many existing benchmarks, and also make the curation and publication of new benchmarks difficult (if not impossible) in numerous high-stakes domains, including healthcare, finance, and education. Hence, although ML holds strong promise in these domains, the lack of high-quality benchmark datasets creates a significant hurdle for the development of methodology and algorithms and leads to missed opportunities.

Synthetic data is a promising solution to the key issues of benchmark dataset curation and publication. Specifically, high-quality synthetic data can be generated in ways that address the following major issues.

1. Data Scarcity. The training and evaluation of ML algorithms require datasets with a sufficient sample size. Note that even if the algorithm can learn from very few samples, we still need sufficient validation data …

DOU QI · Konstantinos Kamnitsas · Yuankai Huo · Xiaoxiao Li · Daniel Moyer · Danielle Pace · Jonas Teuwen · Islem Rekik

[ Physical ]

Nathan Ng · Haoran Zhang · Vinith Suriyakumar · Chantal Shaib · Kyunghyun Cho · Yixuan Li · Alice Oh · Marzyeh Ghassemi

[ Physical ]

As machine learning models find increasing use in the real world, their safe and reliable deployment depends on their robustness to distribution shift. This is especially true for sequential data, which occurs naturally in various data domains such as natural language processing, healthcare, computational biology, and finance. However, building models for sequence data which are robust to distribution shifts presents a unique challenge. Sequential data are often discrete rather than continuous, exhibit difficult-to-characterize distributions, and can display a much greater range of distribution shifts. Although many methods for improving model robustness exist for imaging or tabular data, extending these methods to sequential data is a challenging research direction that often requires fundamentally different techniques.

This workshop aims to facilitate progress towards improving the distributional robustness of models trained on sequential data by bringing together researchers to tackle a wide variety of research questions including, but not limited to:
(1) How well do existing robustness methods work on sequential data, and why do they succeed or fail?
(2) How can we leverage the sequential nature of the data to develop novel and distributionally robust methods?
(3) How do we construct and utilize formalisms for distribution …

Ismini Lourentzou · Joy T Wu · Satyananda Kashyap · Alexandros Karargyris · Leo Anthony Celi · Ban Kawas · Sachin S Talathi

[ Physical ]

Eye gaze has proven to be a cost-efficient way to collect large-scale physiological data that can reveal the underlying human attentional patterns in real-life workflows, and thus has long been explored as a signal to directly measure human-related cognition in various domains. Physiological data (including but not limited to eye gaze) offer new perception capabilities, which could be used in several ML domains, e.g., egocentric perception, embodied AI, NLP, etc. They can help infer human perception, intentions, beliefs, goals, and other cognition properties that are much needed for human-AI interactions and agent coordination. In addition, large collections of eye-tracking data have enabled data-driven modeling of human visual attention mechanisms, for both saliency and scanpath prediction, with twofold advantages: from the neuroscientific perspective to understand biological mechanisms better, and from the AI perspective to equip agents with the ability to mimic or predict human behavior and improve interpretability and interactions.

With the emergence of immersive technologies, now more than ever there is a need for experts from various backgrounds (e.g., the machine learning, vision, and neuroscience communities) to share expertise and contribute to a deeper understanding of the intricacies of cost-efficient human supervision signals (e.g., eye gaze) and their utilization towards …

Yann Dauphin · David Lopez-Paz · Vikas Verma · Boyi Li

[ Physical ]

Goals

Interpolation regularizers are an increasingly popular approach to regularizing deep models. For example, the mixup data augmentation method constructs synthetic examples by linearly interpolating random pairs of training data points (a minimal sketch of mixup follows the topic list below). During their half-decade lifespan, interpolation regularizers have become ubiquitous, fueling state-of-the-art results in virtually all domains, including computer vision and medical diagnosis. This workshop brings together researchers and users of interpolation regularizers to foster research and discussion that advance our understanding of them. This inaugural meeting will have no shortage of interactions and energy to achieve these exciting goals. Suggested topics include, but are not limited to, the intersection between interpolation regularizers and:

* Domain generalization
* Semi-supervised learning
* Privacy-preserving ML
* Theory
* Robustness
* Fairness
* Vision
* NLP
* Medical applications
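
As referenced above, here is a minimal NumPy sketch of mixup, assuming one-hot labels and a Beta(alpha, alpha) mixing coefficient as in the original formulation; the function name and defaults are illustrative:

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Construct synthetic examples by linearly interpolating
    random pairs of training points (mixup).

    x: inputs of shape (batch, ...); y: one-hot labels of shape
    (batch, num_classes). Returns a mixed batch of the same shapes.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)      # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))    # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix
```

A model trained on such batches never sees raw training points alone, which is the regularizing effect the topics above examine.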

## Important dates

* Paper submission deadline: September 22, 2022
* Paper acceptance notification: October 14, 2022
* Workshop: December 2, 2022

## Call for papers

Authors are invited to submit short papers of up to 4 pages, with an unlimited number of pages for references and supplementary materials. Submissions must be anonymized, as the reviewing process will be double-blind. Please use the NeurIPS template for submissions. We welcome submissions that have been …

Manuela Veloso · John Dickerson · Senthil Kumar · Eren K. · Jian Tang · Jie Chen · Peter Henstock · Susan Tibbs · Ani Calinescu · Naftali Cohen · C. Bayan Bruss · Armineh Nourbakhsh

[ Virtual ]

Pan Lu · Swaroop Mishra · Sean Welleck · Yuhuai Wu · Hannaneh Hajishirzi · Percy Liang

[ Physical ]

Mathematical reasoning is a unique aspect of human intelligence and a fundamental building block for scientific and intellectual pursuits. However, learning mathematics is often a challenging human endeavor that relies on expert instructors to create, teach and evaluate mathematical material. From an educational perspective, AI systems that aid in this process offer increased inclusion and accessibility, efficiency, and understanding of mathematics. Moreover, building systems capable of understanding, creating, and using mathematics offers a unique setting for studying reasoning in AI. This workshop will investigate the intersection of mathematics education and AI.

Alon Albalak · Colin Raffel · Chunting Zhou · Deepak Ramachandran · Xuezhe Ma · Sebastian Ruder

[ Physical ]

Transfer learning from large pre-trained language models (PLM) has become the de-facto method for a wide range of natural language processing tasks. Current transfer learning methods, combined with PLMs, have seen outstanding successes in transferring knowledge to new tasks, domains, and even languages. However, existing methods, including fine-tuning, in-context learning, parameter-efficient tuning, semi-parametric models with knowledge augmentation, etc., still lack consistently good performance across different tasks, domains, varying sizes of data resources, and diverse textual inputs.

This workshop aims to invite researchers from different backgrounds to share their latest work in efficient and robust transfer learning methods, discuss challenges and risks of transfer learning models when deployed in the wild, understand positive and negative transfer, and also debate over future directions.

Ishan Misra · Pengtao Xie · Gul Varol · Yale Song · Yuki Asano · Xiaolong Wang · Pauline Luc

[ Physical ]

Ritwik Gupta · Robin Murphy · Eric Heim · Guido Zarrella

[ Physical ]

Humanitarian crises, from disease outbreaks to war to oppression of disadvantaged groups, have threatened people and their communities throughout history. Natural disasters are a single, extreme example of such crises. In the wake of hurricanes, earthquakes, and other such crises, people have ceaselessly sought ways, often harnessing innovation, to provide assistance to victims after disasters have struck.

Through this workshop, we intend to establish meaningful dialogue between the Artificial Intelligence (AI) and Humanitarian Assistance and Disaster Response (HADR) communities. By the end of the workshop, the NeurIPS research community can learn about the practical challenges of aiding those in crisis, while the HADR community can get to know the state of the art and practice in AI. We seek to establish a pipeline for transitioning research created by the NeurIPS community to real-world humanitarian issues. We believe such an endeavor is possible due to recent successes in applying techniques from various AI and Machine Learning (ML) disciplines to HADR.

Sophia Sanborn · Christian Shewmake · Simone Azeglio · Arianna Di Bernardo · Nina Miolane

[ Physical ]

Max A Wiesner
Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths

[ Physical ]

Michael Poli · Winnie Xu · Estefany Kelly Buchanan · Maryam Hosseini · Luca Celotti · Martin Magill · Ermal Rrapaj · Stefano Massaroli · Patrick Kidger · Archis Joglekar · Animesh Garg · David Duvenaud

[ Virtual ]

Santiago Miret · Marta Skreta · Zamyla Morgan-Chan · Benjamin Sanchez-Lengeling · Shyue Ping Ong · Alan Aspuru-Guzik

[ Physical ]

Self-Driving Materials Laboratories have greatly advanced the automation of material design and discovery. They require the integration of diverse fields and consist of three primary components, which intersect with many AI-related research topics:

- AI-Guided Design. This component intersects heavily with algorithmic research at NeurIPS, including (but not limited to) various topic areas such as: Reinforcement Learning and data-driven modeling of physical phenomena using Neural Networks (e.g. Graph Neural Networks and Machine Learning For Physics).

- Automated Chemical Synthesis. This component intersects significantly with robotics research represented at NeurIPS, and includes several parts of real-world robotic systems such as: managing control systems (e.g. Reinforcement Learning) and different sensor modalities (e.g. Computer Vision), as well as predictive models for various phenomena (e.g. Data-Based Prediction of Chemical Reactions).

- Automated Material Characterization. This component intersects heavily with a diverse set of supervised learning techniques that are well-represented at NeurIPS such as: computer vision for microscopy images and automated machine learning based analysis of data generated from different kinds of instruments (e.g. X-Ray based diffraction data for determining material structure).

Jiaxuan You · Marinka Zitnik · Rex Ying · Yizhou Sun · Hanjun Dai · Stefanie Jegelka

[ Physical ]

Background. In recent years, graph learning has quickly grown into an established sub-field of machine learning. Researchers have been focusing on developing novel model architectures, theoretical understandings, scalable algorithms and systems, and successful applications of graph learning across industry and science. In fact, more than 5000 research papers related to graph learning have been published over the past year alone.

Challenges. Despite the success, existing graph learning paradigms have not captured the full spectrum of relationships in the physical and the virtual worlds. For example, in terms of applicability of graph learning algorithms, current graph learning paradigms are often restricted to datasets with explicit graph representations, whereas recent works have shown promise of graph learning methods for applications without explicit graph representations. In terms of usability, while popular graph learning libraries greatly facilitate the implementation of graph learning techniques, finding the right graph representation and model architecture for a given use case still requires heavy expert knowledge. Furthermore, in terms of generalizability, unlike domains such as computer vision and natural language processing where large-scale pre-trained models generalize across downstream applications with little to no fine-tuning and demonstrate impressive performance, such a paradigm has yet to succeed in the graph learning …

Jian Lou · Zhiguang Wang · Bo Li · Dawn Song

[ Physical ]

This workshop focuses on how future researchers and practitioners should prepare themselves for achieving security and privacy in machine learning through decentralization and blockchain techniques, as well as how to leverage machine learning techniques to automate some processes in current decentralized systems and ownership economies in web3. We attempt to share recent related work from different communities, discuss the foundations of trustworthiness problems in machine learning and potential solutions, tooling, and platforms via decentralization, blockchain, and web3, and chart out important directions for future work and cross-community collaborations.

Arno Blaas · Sahra Ghalebikesabi · Javier Antoran · Fan Feng · Melanie F. Pradier · Ian Mason · David Rohde

[ Physical ]

Deep learning has flourished in the last decade. Recent breakthroughs have shown stunning results, and yet, researchers still cannot fully explain why neural networks generalize so well or why some architectures or optimizers work better than others. There is a lack of understanding of existing deep learning systems, which led NeurIPS 2017 Test of Time Award winners Rahimi & Recht to compare machine learning with alchemy and to call for the return of the 'rigour police'.

Despite excellent theoretical work in the field, deep neural networks are so complex that they might not be fully comprehensible through theory alone. Unfortunately, the experimental alternative, rigorous work that neither proves a theorem nor proposes a new method, is currently under-valued in the machine learning community.

To change this, this workshop aims to promote the method of empirical falsification.

We solicit contributions which explicitly formulate a hypothesis related to deep learning or its applications (based on first principles or prior work), and then empirically falsify it through experiments. We further encourage submissions to go a layer deeper and investigate the causes of an initial idea not working as expected. This workshop will showcase how negative results offer important …

Mehdi Rezagholizadeh · Peyman Passban · Yue Dong · Lili Mou · Pascal Poupart · Ali Ghodsi · Qun Liu

[ Physical ]

The second version of the Efficient Natural Language and Speech Processing (ENLSP-II) workshop focuses on fundamental and challenging problems in making natural language and speech processing (especially pre-trained models) more efficient in terms of Data, Model, Training, and Inference. The workshop program offers an interactive platform for gathering different experts and talents from academia and industry through invited talks, panel discussion, paper submissions, reviews, interactive posters, oral presentations, and a mentorship program. This will be a unique opportunity to address the efficiency issues of current models, build connections, exchange ideas and brainstorm solutions, and foster future collaborations. The topics of this workshop can be of interest for people working on general machine learning, deep learning, optimization, theory, and NLP & Speech applications.

Shiqiang Wang · Nathalie Baracaldo Angel · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu

[ Physical ]

Training machine learning models in a centralized fashion often faces significant challenges due to regulatory and privacy concerns in real-world use cases. These include training data being distributed across sites, the computational resources needed to create and maintain a central data repository, and regulatory guidelines (GDPR, HIPAA) that restrict sharing sensitive data. Federated learning (FL) is a new paradigm in machine learning that can mitigate these challenges by training a global model using distributed data, without the need for data sharing. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data necessitates familiarization with and adoption of this relevant and timely topic among the scientific community.
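
To make the paradigm concrete, here is a minimal sketch of one server-side aggregation round in the style of Federated Averaging (FedAvg), a standard FL algorithm; the function name and the size-weighted averaging are illustrative assumptions, not a prescribed workshop method:

```python
import numpy as np

def fedavg_aggregate(client_updates):
    """Aggregate locally trained models without sharing raw data.

    client_updates: list of (weights, n_examples) pairs, where
    weights is a list of NumPy arrays from one client's local
    training. Returns the global model as a size-weighted average.
    """
    total = sum(n for _, n in client_updates)
    first_weights, _ = client_updates[0]
    new_global = [np.zeros_like(w) for w in first_weights]
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            new_global[i] += (n / total) * w  # weight by local data size
    return new_global
```

Only model parameters travel between clients and server; the sensitive training data itself never leaves each client.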

Despite the advantages of FL, and its successful application in certain industry-based cases, this field is still in its infancy due to new challenges that are imposed by limited visibility of the training data, potential lack of trust among participants training a single model, potential privacy inferences, and in some cases, limited or unreliable connectivity.

The goal of this workshop is to bring together researchers and practitioners interested in FL. This day-long event will facilitate interaction among students, scholars, and industry professionals from around the world to understand the topic, identify technical …

Tom White · Yingtao Tian · Lia Coleman

[ Virtual ]

Chen Tang · Karen Leung · Leilani Gilpin · Jiachen Li · Changliu Liu

[ Physical ]

The recent advances in deep learning and artificial intelligence have equipped autonomous agents with increasing intelligence, which enables human-level performance in challenging tasks. In particular, these agents with advanced intelligence have shown great potential in interacting and collaborating with humans (e.g., self-driving cars, industrial robot co-workers, smart homes, and domestic robots). However, the opaque nature of deep learning models makes it difficult to decipher the decision-making process of the agents, thus preventing stakeholders from readily trusting the autonomous agents, especially for safety-critical tasks requiring physical human interactions. In this workshop, we bring together experts with diverse and interdisciplinary backgrounds to build a roadmap for developing and deploying trustworthy interactive autonomous systems at scale. Specifically, we aim to address the following questions: 1) What properties are required for building trust between humans and interactive autonomous systems? How can we assess and ensure these properties without compromising the expressiveness of the models and performance of the overall systems? 2) How can we develop and deploy trustworthy autonomous agents under an efficient and trustful workflow? How should we transfer from development to deployment? 3) How to define standard metrics to quantify trustworthiness, from regulatory, theoretical, and experimental perspectives? How do we know that the …

Mariya Toneva · Javier Turek · Vy Vo · Shailee Jain · Kenneth Norman · Alexander Huth · Uri Hasson · Mihai Capotă

[ Physical ]

Atilim Gunes Baydin · Adji Bousso Dieng · Emine Kucukbenli · Gilles Louppe · Siddharth Mishra-Sharma · Benjamin Nachman · Brian Nord · Savannah Thais · Anima Anandkumar · Kyle Cranmer · Lenka Zdeborová

[ Physical ]

The Machine Learning and the Physical Sciences workshop aims to provide an informal, inclusive and leading-edge venue for research and discussions at the interface of machine learning (ML) and the physical sciences. This interface spans (1) applications of ML in physical sciences (ML for physics), (2) developments in ML motivated by physical insights (physics for ML), and most recently (3) convergence of ML and physical sciences (physics with ML), which inspires questioning what scientific understanding means in the age of complex AI-powered science, and what roles machine and human scientists will play in developing scientific understanding in the future.

Elizabeth Wood · Adji Bousso Dieng · Aleksandrina Goeva · Alex X Lu · Anshul Kundaje · Chang Liu · Debora Marks · Ed Boyden · Eli N Weinstein · Lorin Crawford · Mor Nitzan · Romain Lopez · Tamara Broderick · Ray Jones · Wouter Boomsma · Yixin Wang

[ Virtual ]

Albert Berahas · Jelena Diakonikolas · Jarad Forristal · Brandon Reese · Martin Takac · Yan Xu

[ Physical ]

Optimization is a cornerstone of nearly all modern machine learning (ML) and deep learning (DL). Simple first-order gradient-based methods dominate the field for convincing reasons: low computational cost, simplicity of implementation, and strong empirical results.

Yet second- or higher-order methods are rarely used in DL, despite also having many strengths: faster per-iteration convergence, frequent explicit regularization on step-size, and better parallelization than SGD. Additionally, many scientific fields use second-order optimization with great success.

A driving factor for this is the large difference in development effort. By the time higher-order methods became tractable for DL, first-order methods such as SGD and its main variants (SGD + Momentum, Adam, …) already had many years of maturity and mass adoption.

The purpose of this workshop is to address this gap, to create an environment where higher-order methods are fairly considered and compared against one another, and to foster healthy discussion with the end goal of mainstream acceptance of higher-order methods in ML and DL.
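
As a small illustration of the contrast with first-order updates, here is a sketch of a damped Newton step in NumPy; the damping constant and names are illustrative assumptions rather than any specific method discussed at the workshop:

```python
import numpy as np

def damped_newton_step(grad, hessian, damping=1e-3):
    """Solve (H + damping * I) d = -g for the update direction d.

    Unlike a plain SGD step (-lr * grad), the step is rescaled by
    local curvature; the damping term is one form of the explicit
    step-size regularization noted above.
    """
    h_reg = hessian + damping * np.eye(len(grad))
    return np.linalg.solve(h_reg, -grad)
```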

Abhijat Biswas · Reuben Aronson · Khimya Khetarpal · Akanksha Saran · Ruohan Zhang · Grace Lindsay · Scott Niekum

[ Physical ]

Attention is a widely popular topic studied in many fields such as neuroscience, psychology, and machine learning. A better understanding and conceptualization of attention in both humans and machines has led to significant progress across fields. At the same time, attention is far from a clear or unified concept, with many definitions within and across multiple fields.

Cognitive scientists study how the brain flexibly controls its limited computational resources to accomplish its objectives. Inspired by cognitive attention, machine learning researchers introduce attention as an inductive bias in their models to improve performance or interpretability. Human-computer interaction designers monitor people’s attention during interactions to implicitly detect aspects of their mental states.
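
As one concrete instance of attention as an inductive bias in ML models, here is a minimal NumPy sketch of scaled dot-product attention, the formulation popularized by Transformers; it is an illustrative sketch, not code from any of the fields above:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Weight each value by how well its key matches the query.

    q: (n_q, d) queries; k: (n_k, d) keys; v: (n_k, d_v) values.
    The softmax distributes a fixed budget of weight over inputs,
    loosely mirroring the resource-limited attention studied in
    cognitive science.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])        # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ v                             # weighted sum of values
```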

While the aforementioned research areas all consider attention, each formalizes and operationalizes it in different ways. Bridging this gap will facilitate:
- (Cogsci for AI) More principled forms of attention in AI agents towards more human-like abilities such as robust generalization, quicker learning and faster planning.
- (AI for cogsci) Developing better computational models for modeling human behaviors that involve attention.
- (HCI) Modeling attention during interactions from implicit signals, for fluent and efficient coordination.
- (HCI/ML) Artificial models of algorithmic attention to enable intuitive interpretations of deep models.

Madelon Hulsebos · Haoyu Dong · Bojan Karlaš · Laurel Orr · Pengcheng Yin

[ Physical ]

We develop large models to “understand” images, videos and natural language that fuel many intelligent applications from text completion to self-driving cars. But tabular data has long been overlooked despite its dominant presence in data-intensive systems. By learning latent representations from (semi-)structured tabular data, pretrained table models have shown preliminary but impressive performance for semantic parsing, question answering, table understanding, and data preparation. Considering that such tasks share fundamental properties inherent to tables, representation learning for tabular data is an important direction to explore further. These works also surfaced many open challenges such as finding effective data encodings, pretraining objectives and downstream tasks.

Key questions that we aim to address in this workshop are:
- How should tabular data be encoded to make learned Table Models generalize across tasks?
- Which pre-training objectives, architectures, fine-tuning and prompting strategies, work for tabular data?
- How should the varying formats, data types, and sizes of tables be handled?
- To what extent can Language Models be adapted towards tabular data tasks and what are their limits?
- What tasks can existing Table Models accomplish well and what opportunities lie ahead?
- How do existing Table Models perform, what do they learn, where …

Roshan Rao · Jonas Adler · Namrata Anand · John Ingraham · Sergey Ovchinnikov · Ellen Zhong

[ Physical ]

Alex Bewley · Roberto Calandra · Anca Dragan · Igor Gilitschenski · Emily Hannigan · Masha Itkina · Hamidreza Kasaei · Jens Kober · Danica Kragic · Nathan Lambert · Julien PEREZ · Fabio Ramos · Ransalu Senanayake · Jonathan Tompson · Vincent Vanhoucke · Markus Wulfmeier

[ Virtual ]

Machine learning (ML) has been one of the premier drivers of recent advances in robotics research and has made its way into several real-world robotic applications in unstructured and human-centric environments, such as transportation, healthcare, and manufacturing. At the same time, robotics has been a key motivation for numerous research problems in artificial intelligence research, from efficient algorithms to robust generalization of decision models. However, there are still considerable obstacles to fully leveraging state-of-the-art ML in real-world robotics applications. For capable robots equipped with ML models, guarantees on the robustness and additional analysis of the social implications of these models are required for their utilization in real-world robotic domains that interface with humans (e.g., autonomous vehicles and tele-operated or assistive robots).

To support the development of robots that are safely deployable among humans, the field must consider trustworthiness as a central aspect in the development of real-world robot learning systems. Unlike many other applications of ML, the combined complexity of physical robotic platforms and learning-based perception-action loops presents unique technical challenges. These challenges include concrete technical problems such as very high performance requirements, explainability, predictability, verification, uncertainty quantification, and robust operation in dynamically distributed, open-set domains. Since robots are …

Alessandra Tosi · Andrei Paleyes · Christian Cabrera · Fariba Yousefi · S Roberts

[ Virtual ]

The goal of this event is to bring together people from different communities with the common interest in the Deployment of Machine Learning Systems.

With the dramatic rise of companies dedicated to providing Machine Learning software-as-a-service tools, Machine Learning has become a tool for solving real-world problems that is increasingly accessible in many industrial and social sectors. As the number of deployments grows, so does the number of known challenges and hurdles that practitioners face along the deployment process to ensure the continual delivery of good performance from deployed Machine Learning systems. Such challenges can lie in the adaptation of ML algorithms to concrete use cases, discovery and quality of data, maintenance of production ML systems, as well as ethics.

Yi Ding · Yuanqi Du · Tianfan Fu · Hanchen Wang · Anima Anandkumar · Yoshua Bengio · Anthony Gitter · Carla Gomes · Aviv Regev · Max Welling · Marinka Zitnik

[ Physical ]

Kianté Brantley · Soham Dan · Ji Ung Lee · Khanh Nguyen · Edwin Simpson · Alane Suhr · Yoav Artzi

[ Physical ]

Reihaneh Rabbany · Jian Tang · Michael Bronstein · Shenyang Huang · Meng Qu · Kellin Pelrine · Jianan Zhao · Farimah Poursafaei · Aarash Feizi

[ Physical ]

This workshop bridges the conversation among different areas such as temporal knowledge graph learning, graph anomaly detection, and graph representation learning. It aims to share understanding and techniques to facilitate the development of novel temporal graph learning methods. It also brings together researchers from both academia and industry and connects researchers from various fields aiming to span theories, methodologies, and applications.

Mengjiao Yang · Yilun Du · Jack Parker-Holder · Siddharth Karamcheti · Igor Mordatch · Shixiang (Shane) Gu · Ofir Nachum

[ Physical ]

Yingzhen Li · Yang Song · Valentin De Bortoli · Francois-Xavier Briol · Wenbo Gong · Alexia Jolicoeur-Martineau · Arash Vahdat

[ Physical ]

Sara Hooker · Rosanne Liu · Pablo Samuel Castro · FatemehSadat Mireshghallah · Sunipa Dev · Federico Carnevale · Benjamin Rosman · João Madeira Araújo · Savannah Thais

[ Physical ]

Huaxiu Yao · Frank Hutter · Eleni Triantafillou · Fabio Ferreira · Joaquin Vanschoren · Qi Lei

[ Physical ]

Neel Kant · Martin Maas · Azade Nazi · Benoit Steiner · Xinlei XU · Dan Zhang

[ Physical ]

Wei Pan · Shanghang Zhang · Pradeep Ravikumar · Vittorio Ferrari · Fisher Yu · Hao Dong · Xin Wang

[ Physical ]

Karol Hausman · Qi Zhang · Matthew Taylor · Martha White · Suraj Nair · Manan Tomar · Risto Vuorio · Ted Xiao · Zeyu Zheng

[ Virtual ]

Alex Hanna · Rida Qadri · Fernando Diaz · Nick Seaver · Morgan Scheuerman

[ Virtual ]

Huan Zhang · Linyi Li · Chaowei Xiao · J. Zico Kolter · Anima Anandkumar · Bo Li

[ Virtual ]

To address the negative societal impacts of ML, researchers have looked into different principles and constraints to ensure trustworthy and socially responsible machine learning systems. This workshop makes a first attempt towards bridging the gap between the security, privacy, fairness, ethics, game theory, and machine learning communities and aims to discuss the principles and experiences of developing trustworthy and socially responsible machine learning systems. The workshop also focuses on how future researchers and practitioners should prepare themselves for reducing the risks of unintended behaviors of sophisticated ML models.

This workshop aims to bring together researchers interested in the emerging and interdisciplinary field of trustworthy and socially responsible machine learning from a broad range of disciplines with different perspectives to this problem. We attempt to highlight recent related work from different communities, clarify the foundations of trustworthy machine learning, and chart out important directions for future work and cross-community collaborations.

Rediet Abebe · Moritz Hardt · Angela Jin · Ludwig Schmidt · Mírian Silva · Tainá Turella · Rebecca Wexler

[ Virtual ]

As the applications of machine learning have increased over the past decade, so have the associated performance expectations. With machine learning now being deployed in safety- and security-critical areas such as autonomous vehicles and healthcare, the research community has begun to scrutinize the generalization capabilities of current machine learning models closely. One prominent research thread in reliable machine learning takes an explicitly adversarial perspective, e.g., the widely studied phenomenon of adversarial examples in computer vision and other domains. Researchers have proposed a multitude of attacks on trained models and new training algorithms to increase robustness to such attacks. In this workshop, we connect this burgeoning research field to an important application area that offers a clear motivation for an explicitly adversarial perspective: the U.S. criminal legal system.
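
To ground the adversarial perspective, here is a minimal sketch of the fast gradient sign method (FGSM), one of the widely studied attacks alluded to above; the helper name and the assumption of inputs scaled to [0, 1] are illustrative:

```python
import numpy as np

def fgsm_perturb(x, loss_grad_x, eps=0.03):
    """Craft an adversarial example from input x.

    loss_grad_x: gradient of the model's loss with respect to x,
    computed by the attacker. Each input entry moves eps in the
    direction that most increases the loss.
    """
    x_adv = x + eps * np.sign(loss_grad_x)
    return np.clip(x_adv, 0.0, 1.0)  # stay in the valid input range
```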

Jiachen Li · Nigamaa Nayakanti · Xinshuo Weng · Daniel Omeiza · Ali Baheri · German Ros · Rowan McAllister

[ Physical ]

Andrey Kormilitzin · Dan Joyce · Nenad Tomasev · Kevin McKee

[ Virtual ]

Jacob Steinhardt · Victoria Krakovna · Dan Hendrycks · Nicholas Carlini · Dawn Song

[ Virtual ]

Sören Becker · Alexis Bellot · Cecilia Casolo · Niki Kilbertus · Sara Magliacane · Yuyang (Bernie) Wang

[ Physical ]

Divyansh Kaushik · Jennifer Hsia · Jessica Huynh · Yonadav Shavit · Samuel Bowman · Ting-Hao Huang · Douwe Kiela · Zachary Lipton · Eric Michael Smith

[ Physical ]

Michael Muller · Plamen P Angelov · Hal Daumé III · Shion Guha · Q.Vera Liao · Nuria Oliver · David Piorkowski

[ Virtual ]

Laetitia Teodorescu · Laura Ruis · Tristan Karch · Cédric Colas · Paul Barde · Jelena Luketina · Athul Jacob · Pratyusha Sharma · Edward Grefenstette · Jacob Andreas · Marc-Alexandre Côté

[ Physical ]

Language is one of the most impressive human accomplishments and is believed to be core to our ability to learn, teach, reason, and interact with others. Learning many complex tasks or skills would be significantly more challenging without relying on language to communicate, and language is believed to have a structuring impact on human thought. Written language has also given humans the ability to store information and insights about the world and pass them across generations and continents. Yet, the ability of current state-of-the-art reinforcement learning agents to understand natural language is limited.

Practically speaking, the ability to integrate and learn from language, in addition to rewards and demonstrations, has the potential to improve the generalization, scope and sample efficiency of agents. For example, agents that are capable of transferring domain knowledge from textual corpora might be able to explore a given environment much more efficiently or to perform zero- or few-shot learning in novel environments. Furthermore, many real-world tasks, including personal assistants and general household robots, require agents to process language by design, whether to enable interaction with humans, or simply to use existing interfaces.

To support this field of research, we are interested in fostering …

Alexander Terenin · Elizaveta Semenova · Geoff Pleiss · Zi Wang

[ Physical ]

Peetak Mitra · Maria João Sousa · Mark Roth · Jan Drgona · Emma Strubell · Yoshua Bengio

[ Virtual ]

Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff

[ Physical ]

As machine learning models permeate every aspect of decision making systems in consequential areas such as healthcare and criminal justice, it has become critical for these models to satisfy trustworthiness desiderata such as fairness, interpretability, accountability, privacy and security. Initially studied in isolation, recent work has emerged at the intersection of these different fields of research, leading to interesting questions on how fairness can be achieved using a causal perspective and under privacy concerns.

Indeed, the field of causal fairness has seen a large expansion in recent years, notably as a way to counteract the limitations of initial statistical definitions of fairness. While a causal framing provides flexibility in modelling and mitigating sources of bias using a causal model, proposed approaches rely heavily on assumptions about the data generating process, i.e., the faithfulness and ignorability assumptions. This leads to open discussions on (1) how to fully characterize causal definitions of fairness, (2) how, if possible, to improve the applicability of such definitions, and (3) what constitutes a suitable causal framing of bias from a sociotechnical perspective.

Additionally, while most existing work on causal fairness assumes observed sensitive attribute data, such information is likely to be unavailable due to, for example, …

Fahad Shahbaz Khan · Gul Varol · Salman Khan · Ping Luo · Rao Anwer · Ashish Vaswani · Hisham Cholakkal · Niki Parmar · Joost van de Weijer · Mubarak Shah

[ Virtual ]

Chelsea Finn · Fanny Yang · Hongseok Namkoong · Masashi Sugiyama · Jacob Eisenstein · Jonas Peters · Rebecca Roelofs · Shiori Sagawa · Pang Wei Koh · Yoonho Lee

[ Physical ]

Nick Pawlowski · Jeroen Berrevoets · Caroline Uhler · Kun Zhang · Mihaela van der Schaar · Cheng Zhang

[ Physical ]

Causality has a long history, providing it with many principled approaches to identify a causal effect (or even distill cause from effect). However, these approaches are often restricted to very specific situations, requiring very specific assumptions. This contrasts heavily with recent advances in machine learning. Real-world problems aren't granted the luxury of making strict assumptions, yet still require causal thinking to solve. Armed with the rigor of causality, and the can-do attitude of machine learning, we believe the time is ripe to start working towards solving real-world problems.

Sana Tonekaboni · Thomas Hartvigsen · Satya Narayan Shukla · Gunnar Rätsch · Marzyeh Ghassemi · Anna Goldenberg

[ Physical ]

Time series data are ubiquitous in healthcare, from medical time series to wearable data, and present an exciting opportunity for machine learning methods to extract actionable insights about human health. However, huge gaps remain between the existing time series literature and what is needed to make machine learning systems practical and deployable for healthcare. This is because learning from time series for health is notoriously challenging: labels are often noisy or missing, data can be multimodal and extremely high dimensional, missing values are pervasive, measurements are irregular, data distributions shift rapidly over time, explaining model outcomes is challenging, and deployed models require careful maintenance over time. These challenges introduce interesting research problems that the community has been actively working on for the last few years, with significant room for contribution still remaining. Learning from time series for health is a uniquely challenging and important area with increasing application. Significant advancements are required to realize the societal benefits of these systems for healthcare. This workshop will bring together machine learning researchers dedicated to advancing the field of time series modeling in healthcare to bring these models closer to deployment.

Frank Schneider · Zachary Nado · Philipp Hennig · George Dahl · Naman Agarwal

[ Physical ]

Workshop Description

Training contemporary neural networks is a lengthy and often costly process, both in human designer time and compute resources. Although the field has invented numerous approaches, neural network training still usually involves an inconvenient amount of “babysitting” to get the model to train properly. This not only requires enormous compute resources but also makes deep learning less accessible to outsiders and newcomers. This workshop will be centered around the question “How can we train neural networks faster?” by focusing on the effects algorithms (not hardware or software developments) have on the training time of neural networks. These algorithmic improvements can come in the form of novel methods, e.g. new optimizers or more efficient data selection strategies, or through empirical experience, e.g. best practices for quickly identifying well-working hyperparameter settings or informative metrics to monitor during training.

We all think we know how to train deep neural networks, but we all seem to have different ideas. Ask any deep learning practitioner about the best practices of neural network training, and you will often hear a collection of arcane recipes. Frustratingly, these hacks vary wildly between companies and teams. This workshop offers a platform to talk about these ideas, agree …

Matej Zečević · Devendra Dhami · Christina Winkler · Thomas Kipf · Robert Peharz · Petar Veličković

[ Virtual ]