
Mehdi Rezagholizadeh · Peyman Passban · Yue Dong · Lili Mou · Pascal Poupart · Ali Ghodsi · Qun Liu

[ Ballroom C ]

The second version of the Efficient Natural Language and Speech Processing (ENLSP-II) workshop focuses on fundamental and challenging problems to make natural language and speech processing (especially pre-trained models) more efficient in terms of Data, Model, Training, and Inference. The workshop program offers an interactive platform for gathering different experts and talents from academia and industry through invited talks, panel discussion, paper submissions, reviews, interactive
posters, oral presentations, and a mentorship program. This will be a unique opportunity to address the efficiency issues of current models, build connections, exchange ideas and brainstorm solutions, and foster future collaborations. The topics of this workshop can be of interest to people working on general machine learning, deep learning, optimization, theory, and NLP & Speech applications.

Chen Tang · Karen Leung · Leilani Gilpin · Jiachen Li · Changliu Liu

[ Room 281 - 282 ]

The recent advances in deep learning and artificial intelligence have equipped autonomous agents with increasing intelligence, which enables human-level performance in challenging tasks. In particular, these agents with advanced intelligence have shown great potential in interacting and collaborating with humans (e.g., self-driving cars, industrial robot co-workers, smart homes and domestic robots). However, the opaque nature of deep learning models makes it difficult to decipher the decision-making process of the agents, thus preventing stakeholders from readily trusting the autonomous agents, especially for safety-critical tasks requiring physical human interactions. In this workshop, we bring together experts with diverse and interdisciplinary backgrounds, to build a roadmap for developing and deploying trustworthy interactive autonomous systems at scale. Specifically, we aim to address the following questions: 1) What properties are required for building trust between humans and interactive autonomous systems? How can we assess and ensure these properties without compromising the expressiveness of the models and performance of the overall systems? 2) How can we develop and deploy trustworthy autonomous agents under an efficient and trusted workflow? How should we transition from development to deployment? 3) How to define standard metrics to quantify trustworthiness, from regulatory, theoretical, and experimental perspectives? How do we know that the …

Yi Ding · Yuanqi Du · Tianfan Fu · Hanchen Wang · Anima Anandkumar · Yoshua Bengio · Anthony Gitter · Carla Gomes · Aviv Regev · Max Welling · Marinka Zitnik

[ Room 388 - 390 ]

Mihaela van der Schaar · Zhaozhi Qian · Sergul Aydore · Dimitris Vlitas · Dino Oglic · Tucker Balch

[ Room 288 - 289 ]

Advances in machine learning owe much to the public availability of high-quality benchmark datasets and the well-defined problem settings that they encapsulate. Examples are abundant: CIFAR-10 for image classification, COCO for object detection, SQuAD for question answering, BookCorpus for language modelling, etc. There is a general belief that the accessibility of high-quality benchmark datasets is central to the thriving of our community.

However, three prominent issues affect benchmark datasets: data scarcity, privacy, and bias. They already manifest in many existing benchmarks, and also make the curation and publication of new benchmarks difficult (if not impossible) in numerous high-stakes domains, including healthcare, finance, and education. Hence, although ML holds strong promise in these domains, the lack of high-quality benchmark datasets creates a significant hurdle for the development of methodology and algorithms and leads to missed opportunities.

Synthetic data is a promising solution to the key issues of benchmark dataset curation and publication. Specifically, high-quality synthetic data generation could be done while addressing the following major issues.

1. Data Scarcity. The training and evaluation of ML algorithms require datasets with a sufficient sample size. Note that even if the algorithm can learn from very few samples, we still need sufficient validation data …

Santiago Miret · Marta Skreta · Zamyla Morgan-Chan · Benjamin Sanchez-Lengeling · Shyue Ping Ong · Alan Aspuru-Guzik

[ Room 386 ]

Self-Driving Materials Laboratories have greatly advanced the automation of material design and discovery. They require the integration of diverse fields and consist of three primary components, which intersect with many AI-related research topics:

- AI-Guided Design. This component intersects heavily with algorithmic research at NeurIPS, including (but not limited to) various topic areas such as: Reinforcement Learning and data-driven modeling of physical phenomena using Neural Networks (e.g. Graph Neural Networks and Machine Learning For Physics).

- Automated Chemical Synthesis. This component intersects significantly with robotics research represented at NeurIPS, and includes several parts of real-world robotic systems such as: managing control systems (e.g. Reinforcement Learning) and different sensor modalities (e.g. Computer Vision), as well as predictive models for various phenomena (e.g. Data-Based Prediction of Chemical Reactions).

- Automated Material Characterization. This component intersects heavily with a diverse set of supervised learning techniques that are well-represented at NeurIPS such as: computer vision for microscopy images and automated machine learning based analysis of data generated from different kinds of instruments (e.g. X-Ray based diffraction data for determining material structure).

Albert Berahas · Jelena Diakonikolas · Jarad Forristal · Brandon Reese · Martin Takac · Yan Xu

[ Room 275 - 277 ]

Optimization is a cornerstone of nearly all modern machine learning (ML) and deep learning (DL). Simple first-order gradient-based methods dominate the field for convincing reasons: low computational cost, simplicity of implementation, and strong empirical results.

Yet second- or higher-order methods are rarely used in DL, despite also having many strengths: faster per-iteration convergence, frequent explicit regularization on step-size, and better parallelization than SGD. Additionally, many scientific fields use second-order optimization with great success.
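The per-iteration convergence advantage can be made concrete on a toy quadratic (this is an illustrative sketch, not a method discussed by the workshop): with the Hessian available, one Newton step lands on the minimizer exactly, while gradient descent needs many cheap steps.

```python
import numpy as np

# Toy quadratic f(w) = 0.5 * w^T A w - b^T w with known Hessian A.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
w_star = np.linalg.solve(A, b)          # exact minimizer

def grad(w):
    return A @ w - b                    # gradient of f

# First-order: many cheap steps with a tuned step size.
w_gd = np.zeros(2)
for _ in range(100):
    w_gd -= 0.1 * grad(w_gd)

# Second-order: a single Newton step solves a quadratic exactly.
w_newton = np.zeros(2) - np.linalg.solve(A, grad(np.zeros(2)))
```

For non-quadratic deep learning losses the Hessian is only a local model, so the one-step behavior does not carry over literally, but the faster per-iteration progress does.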

A driving factor for this is the large difference in development effort. By the time higher-order methods were tractable for DL, first-order methods such as SGD and its main variants (SGD + Momentum, Adam, …) already had many years of maturity and mass adoption.

The purpose of this workshop is to address this gap, to create an environment where higher-order methods are fairly considered and compared against one another, and to foster healthy discussion with the end goal of mainstream acceptance of higher-order methods in ML and DL.

Aviral Kumar · Rishabh Agarwal · Aravind Rajeswaran · Wenxuan Zhou · George Tucker · Doina Precup

[ Room 291 - 292 ]

While offline RL focuses on learning solely from fixed datasets, one of the main learning points from the previous edition of the offline RL workshop was that large-scale RL applications typically want to use offline RL as part of a bigger system, as opposed to it being the end goal in itself. Thus, we propose to shift the focus from algorithm design and offline RL applications to how offline RL can be a launchpad, i.e., a tool or a starting point, for solving challenges in sequential decision-making such as exploration, generalization, transfer, safety, and adaptation. Particularly, we are interested in studying and discussing methods for learning expressive models, policies, skills and value functions from data that can help us make progress towards efficiently tackling these challenges, which are otherwise often intractable.

Submission site: The submission deadline is September 25, 2022 (Anywhere on Earth). Please refer to the submission page for more details.

Mariya Toneva · Javier Turek · Vy Vo · Shailee Jain · Kenneth Norman · Alexander Huth · Uri Hasson · Mihai Capotă

[ Room 397 ]

One of the key challenges for AI is to understand, predict, and model data over time. Pretrained networks should be able to temporally generalize, or adapt to shifts in data distributions that occur over time. Our current state-of-the-art (SOTA) still struggles to model and understand data over long temporal durations – for example, SOTA models are limited to processing several seconds of video, and powerful transformer models are still fundamentally limited by their attention spans. On the other hand, humans and other biological systems are able to flexibly store and update information in memory to comprehend and manipulate multimodal streams of input. Cognitive neuroscientists propose that they do so via the interaction of multiple memory systems with different neural mechanisms. What types of memory systems and mechanisms already exist in our current AI models? First, there are extensions of the classic proposal that memories are formed via synaptic plasticity mechanisms – information can be stored in the static weights of a pre-trained network, or in fast weights that more closely resemble short-term plasticity mechanisms. Then there are persistent memory states, such as those in LSTMs or in external differentiable memory banks, which store information as neural activations that can change …

Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu

[ Room 298 - 299 ]

Training machine learning models in a centralized fashion often faces significant challenges due to regulatory and privacy concerns in real-world use cases. These include distributed training data, computational resources to create and maintain a central data repository, and regulatory guidelines (GDPR, HIPAA) that restrict sharing sensitive data. Federated learning (FL) is a new paradigm in machine learning that can mitigate these challenges by training a global model using distributed data, without the need for data sharing. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data necessitates familiarization with and adoption of this relevant and timely topic among the scientific community.
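As a rough sketch of this paradigm (the helper names and the least-squares objective are illustrative assumptions, not code from any FL framework), one federated averaging round trains locally on each client and shares only model weights:

```python
import numpy as np

def local_update(w, data, lr=0.2, steps=5):
    """One client's local gradient steps on a toy least-squares objective."""
    X, y = data
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_round(global_w, clients):
    """One round of federated averaging: local training, then aggregation."""
    local_ws = [local_update(global_w, d) for d in clients]
    sizes = [len(d[1]) for d in clients]
    # Size-weighted average of client models; raw data never leaves a client.
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):                       # three clients with private data
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(100):                     # repeated rounds recover true_w
    w = fedavg_round(w, clients)
```

Real deployments add secure aggregation, client sampling, and handling of non-IID data on top of this basic loop.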

Despite the advantages of FL, and its successful application in certain industry-based cases, this field is still in its infancy due to new challenges that are imposed by limited visibility of the training data, potential lack of trust among participants training a single model, potential privacy inferences, and in some cases, limited or unreliable connectivity.

The goal of this workshop is to bring together researchers and practitioners interested in FL. This day-long event will facilitate interaction among students, scholars, and industry professionals from around the world to understand the topic, identify technical …

Yann Dauphin · David Lopez-Paz · Vikas Verma · Boyi Li

[ Room 393 ]


Interpolation regularizers are an increasingly popular approach to regularize deep models. For example, the mixup data augmentation method constructs synthetic examples by linearly interpolating random pairs of training data points. During their half-decade lifespan, interpolation regularizers have become ubiquitous and fuel state-of-the-art results in virtually all domains, including computer vision and medical diagnosis. This workshop brings together researchers and users of interpolation regularizers to foster research and discussion to advance and understand interpolation regularizers. This inaugural meeting will have no shortage of interactions and energy to achieve these exciting goals. Suggested topics include, but are not limited to, the intersection between interpolation regularizers and:
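A minimal sketch of the mixup step described above (the function name and Beta-distribution parameter are illustrative assumptions):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Construct synthetic examples by interpolating random pairs."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)            # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))          # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm]   # interpolated inputs
    y_mix = lam * y + (1 - lam) * y[perm]   # interpolated one-hot labels
    return x_mix, y_mix

x = np.arange(8.0).reshape(4, 2)            # toy batch of four examples
y = np.eye(4)                               # one-hot labels
x_mix, y_mix = mixup_batch(x, y)
```

Because the labels are interpolated with the same coefficient as the inputs, each mixed label still sums to one and can be used directly with a cross-entropy loss.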

* Domain generalization
* Semi-supervised learning
* Privacy-preserving ML
* Theory
* Robustness
* Fairness
* Vision
* Medical applications

## Important dates

* Paper submission deadline: September 22, 2022
* Paper acceptance notification: October 14, 2022
* Workshop: December 2, 2022

## Call for papers

Authors are invited to submit short papers with up to 4 pages, but unlimited number of pages for references and supplementary materials. The submissions must be anonymized as the reviewing process will be double-blind. Please use the NeurIPS template for submissions. We welcome submissions that have been …

Shanghang Zhang · Hao Dong · Wei Pan · Pradeep Ravikumar · Vittorio Ferrari · Fisher Yu · Xin Wang · Zihan Ding

[ Room 396 ]

Recent years have witnessed the rising need for machine learning systems that can interact with humans in the learning loop. Such systems can be applied to computer vision, natural language processing, robotics, and human-computer interaction. Creating and running such systems calls for interdisciplinary research in artificial intelligence, machine learning, and software engineering design, which we abstract as Human in the Loop Learning (HiLL).

The HiLL workshop aims to bring together researchers and practitioners working on the broad areas of HiLL, ranging from interactive/active learning algorithms for real-world decision-making systems (e.g., autonomous driving vehicles, robotic systems, etc.), human-inspired learning that mitigates the gap between human intelligence and machine intelligence, human-machine collaborative learning that creates a more powerful learning system, lifelong learning that transfers knowledge to learn new tasks over a lifetime, as well as interactive system designs (e.g., data visualization, annotation systems, etc.).

The HiLL workshop continues the previous effort to provide a platform for researchers from interdisciplinary areas to share their recent research. In this year’s workshop, a special feature is to encourage the discussion on the interactive and collaborative learning between human and machine learning agents: Can they be organically combined to create a more powerful learning system? We …

Nick Pawlowski · Jeroen Berrevoets · Caroline Uhler · Kun Zhang · Mihaela van der Schaar · Cheng Zhang

[ Room 295 - 296 ]

Causality has a long history, providing it with many principled approaches to identify a causal effect (or even distill cause from effect). However, these approaches are often restricted to very specific situations, requiring very specific assumptions. This contrasts heavily with recent advances in machine learning. Real-world problems aren’t granted the luxury of making strict assumptions, yet still require causal thinking to solve. Armed with the rigor of causality, and the can-do attitude of machine learning, we believe the time is ripe to start working towards solving real-world problems.

Frank Schneider · Zachary Nado · Philipp Hennig · George Dahl · Naman Agarwal

[ Theater B ]

Workshop Description

Training contemporary neural networks is a lengthy and often costly process, both in human designer time and compute resources. Although the field has invented numerous approaches, neural network training still usually involves an inconvenient amount of “babysitting” to get the model to train properly. This not only requires enormous compute resources but also makes deep learning less accessible to outsiders and newcomers. This workshop will be centered around the question “How can we train neural networks faster” by focusing on the effects algorithms (not hardware or software developments) have on the training time of neural networks. These algorithmic improvements can come in the form of novel methods, e.g. new optimizers or more efficient data selection strategies, or through empirical experience, e.g. best practices for quickly identifying well-working hyperparameter settings or informative metrics to monitor during training.

We all think we know how to train deep neural networks, but we all seem to have different ideas. Ask any deep learning practitioner about the best practices of neural network training, and you will often hear a collection of arcane recipes. Frustratingly, these hacks vary wildly between companies and teams. This workshop offers a platform to talk about these ideas, agree …

Madelon Hulsebos · Bojan Karlaš · Pengcheng Yin · haoyu dong

[ Room 398 ]

We develop large models to “understand” images, videos and natural language that fuel many intelligent applications from text completion to self-driving cars. But tabular data has long been overlooked despite its dominant presence in data-intensive systems. By learning latent representations from (semi-)structured tabular data, pretrained table models have shown preliminary but impressive performance for semantic parsing, question answering, table understanding, and data preparation. Considering that such tasks share fundamental properties inherent to tables, representation learning for tabular data is an important direction to explore further. These works also surfaced many open challenges such as finding effective data encodings, pretraining objectives and downstream tasks.

Key questions that we aim to address in this workshop are:
- How should tabular data be encoded to make learned Table Models generalize across tasks?
- Which pre-training objectives, architectures, fine-tuning and prompting strategies, work for tabular data?
- How should the varying formats, data types, and sizes of tables be handled?
- To what extent can Language Models be adapted to tabular data tasks, and what are their limits?
- What tasks can existing Table Models accomplish well and what opportunities lie ahead?
- How do existing Table Models perform, what do they learn, where …

Laetitia Teodorescu · Laura Ruis · Tristan Karch · Cédric Colas · Paul Barde · Jelena Luketina · Athul Jacob · Pratyusha Sharma · Edward Grefenstette · Jacob Andreas · Marc-Alexandre Côté

[ Room 391 ]

Language is one of the most impressive human accomplishments and is believed to be the core to our ability to learn, teach, reason and interact with others. Learning many complex tasks or skills would be significantly more challenging without relying on language to communicate, and language is believed to have a structuring impact on human thought. Written language has also given humans the ability to store information and insights about the world and pass it across generations and continents. Yet, the ability of current state-of-the-art reinforcement learning agents to understand natural language is limited.

Practically speaking, the ability to integrate and learn from language, in addition to rewards and demonstrations, has the potential to improve the generalization, scope and sample efficiency of agents. For example, agents that are capable of transferring domain knowledge from textual corpora might be able to much more efficiently explore in a given environment or to perform zero or few shot learning in novel environments. Furthermore, many real-world tasks, including personal assistants and general household robots, require agents to process language by design, whether to enable interaction with humans, or simply use existing interfaces.

To support this field of research, we are interested in fostering …

Jiaxuan You · Marinka Zitnik · Rex Ying · Yizhou Sun · Hanjun Dai · Stefanie Jegelka

[ Theater A ]

Background. In recent years, graph learning has quickly grown into an established sub-field of machine learning. Researchers have been focusing on developing novel model architectures, theoretical understandings, scalable algorithms and systems, and successful applications of graph learning across industry and science. In fact, more than 5000 research papers related to graph learning have been published over the past year alone.

Challenges. Despite the success, existing graph learning paradigms have not captured the full spectrum of relationships in the physical and the virtual worlds. For example, in terms of applicability of graph learning algorithms, current graph learning paradigms are often restricted to datasets with explicit graph representations, whereas recent works have shown promise of graph learning methods for applications without explicit graph representations. In terms of usability, while popular graph learning libraries greatly facilitate the implementation of graph learning techniques, finding the right graph representation and model architecture for a given use case still requires heavy expert knowledge. Furthermore, in terms of generalizability, unlike domains such as computer vision and natural language processing where large-scale pre-trained models generalize across downstream applications with little to no fine-tuning and demonstrate impressive performance, such a paradigm has yet to succeed in the graph learning …

Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths

[ Room 394-395 ]

Yingzhen Li · Yang Song · Valentin De Bortoli · Francois-Xavier Briol · Wenbo Gong · Alexia Jolicoeur-Martineau · Arash Vahdat

[ Room 293 - 294 ]

The score function, which is the gradient of the log-density, provides a unique way to represent probability distributions. By working with distributions through score functions, researchers have been able to develop efficient tools for machine learning and statistics, collectively known as score-based methods.
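A small numerical illustration of this definition (using a univariate Gaussian purely for convenience): the score is the derivative of the log-density, and it never depends on the normalizing constant, which cancels in the gradient.

```python
import numpy as np

def log_density(x, mu=0.0, sigma=1.0):
    """Log-density of a univariate Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def score(x, mu=0.0, sigma=1.0):
    """Score function: gradient of the log-density with respect to x."""
    return -(x - mu) / sigma ** 2

# A central finite difference of the log-density matches the analytic score.
x0 = 1.5
numeric = (log_density(x0 + 1e-5) - log_density(x0 - 1e-5)) / 2e-5
```

This insensitivity to the normalizing constant is precisely what makes score-based methods attractive when densities are known only up to proportionality.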

Score-based methods have had a significant impact on largely disjoint subfields of machine learning and statistics, such as generative modeling, Bayesian inference, hypothesis testing, control variates and Stein’s method. For example, score-based generative models, or denoising diffusion models, have emerged as the state-of-the-art technique for generating high quality and diverse images. In addition, recent developments in Stein’s method and score-based approaches for stochastic differential equations (SDEs) have contributed to the development of fast and robust Bayesian posterior inference in high dimensions. These have potential applications in engineering fields, where they could help improve simulation models.

At our workshop, we will bring together researchers from these various subfields to discuss the success of score-based methods, and identify common challenges across different research areas. We will also explore the potential for applying score-based methods to even more real-world applications, including in computer vision, signal processing, and computational chemistry. By doing so, we hope to foster collaboration among researchers and …

Qi Dou · Konstantinos Kamnitsas · Yuankai Huo · Xiaoxiao Li · Daniel Moyer · Danielle Pace · Jonas Teuwen · Islem Rekik

[ Room 283 - 285 ]

'Medical Imaging meets NeurIPS' is a satellite workshop established in 2017. The workshop aims to bring researchers together from the medical image computing and machine learning communities. The objective is to discuss the major challenges in the field and opportunities for joining forces. This year the workshop will feature online oral and poster sessions with an emphasis on audience interactions. In addition, there will be a series of high-profile invited speakers from industry, academia, engineering and medical sciences giving an overview of recent advances, challenges, latest technology and efforts for sharing clinical data.

Alexander Terenin · Elizaveta Semenova · Geoff Pleiss · Zi Wang

[ Room 387 ]

In recent years, the growth of decision-making applications, where principled handling of uncertainty is of key concern, has led to increased interest in Bayesian techniques. By offering the capacity to assess and propagate uncertainty in a principled manner, Gaussian processes have become a key technique in areas such as Bayesian optimization, active learning, and probabilistic modeling of dynamical systems. In parallel, the need for uncertainty-aware modeling of quantities that vary over space and time has led to large-scale deployment of Gaussian processes, particularly in application areas such as epidemiology. In this workshop, we bring together researchers from different communities to share ideas and success stories. By showcasing key applied challenges, along with recent theoretical advances, we hope to foster connections and prompt fruitful discussion. We invite researchers to submit extended abstracts for contributed talks and posters.

Nathan Ng · Haoran Zhang · Vinith Suriyakumar · Chantal Shaib · Kyunghyun Cho · Yixuan Li · Alice Oh · Marzyeh Ghassemi

[ Room 290 ]

As machine learning models find increasing use in the real world, ensuring their safe and reliable deployment depends on ensuring their robustness to distribution shift. This is especially true for sequential data, which occurs naturally in various data domains such as natural language processing, healthcare, computational biology, and finance. However, building models for sequence data which are robust to distribution shifts presents a unique challenge. Sequential data are often discrete rather than continuous, exhibit difficult-to-characterize distributions, and can display a much greater range of types of distributional shifts. Although many methods for improving model robustness exist for imaging or tabular data, extending these methods to sequential data is a challenging research direction that often requires fundamentally different techniques.

This workshop aims to facilitate progress towards improving the distributional robustness of models trained on sequential data by bringing together researchers to tackle a wide variety of research questions including, but not limited to:
(1) How well do existing robustness methods work on sequential data, and why do they succeed or fail?
(2) How can we leverage the sequential nature of the data to develop novel and distributionally robust methods?
(3) How do we construct and utilize formalisms for distribution …

Huaxiu Yao · Eleni Triantafillou · Fabio Ferreira · Joaquin Vanschoren · Qi Lei

[ Theater C ]

Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to efficiently learn new tasks, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations, classifiers, and policies for acting in environments. In practice, meta-learning has been shown to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems. Moreover, improving one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and neuroscience shows a strong connection between human reward learning and the growing sub-field of meta-reinforcement learning.

Some of the fundamental questions that this workshop aims to address are:
- What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
- What is the relationship between meta-learning, continual learning, and transfer learning?
- What interactions exist between meta-learning and large pretrained / foundation models?
- What principles can we learn from meta-learning to …

Abhijat Biswas · Akanksha Saran · Khimya Khetarpal · Reuben Aronson · Ruohan Zhang · Grace Lindsay · Scott Niekum

[ Room 399 ]

Attention is a widely popular topic studied in many fields such as neuroscience, psychology, and machine learning. A better understanding and conceptualization of attention in both humans and machines has led to significant progress across fields. At the same time, attention is far from a clear or unified concept, with many definitions within and across multiple fields.

Cognitive scientists study how the brain flexibly controls its limited computational resources to accomplish its objectives. Inspired by cognitive attention, machine learning researchers introduce attention as an inductive bias in their models to improve performance or interpretability. Human-computer interaction designers monitor people’s attention during interactions to implicitly detect aspects of their mental states.

While the aforementioned research areas all consider attention, each formalizes and operationalizes it in different ways. Bridging this gap will facilitate:
- (Cogsci for AI) More principled forms of attention in AI agents towards more human-like abilities such as robust generalization, quicker learning and faster planning.
- (AI for cogsci) Developing better computational models for modeling human behaviors that involve attention.
- (HCI) Modeling attention during interactions from implicit signals for fluent and efficient coordination
- (HCI/ML) Artificial models of algorithmic attention to enable intuitive interpretations of deep models.

Sana Tonekaboni · Thomas Hartvigsen · Satya Narayan Shukla · Gunnar Rätsch · Marzyeh Ghassemi · Anna Goldenberg

[ Room 392 ]

Time series data are ubiquitous in healthcare, from medical time series to wearable data, and present an exciting opportunity for machine learning methods to extract actionable insights about human health. However, a huge gap remains between the existing time series literature and what is needed to make machine learning systems practical and deployable for healthcare. This is because learning from time series for health is notoriously challenging: labels are often noisy or missing, data can be multimodal and extremely high dimensional, missing values are pervasive, measurements are irregular, data distributions shift rapidly over time, explaining model outcomes is challenging, and deployed models require careful maintenance over time. These challenges introduce interesting research problems that the community has been actively working on for the last few years, with significant room for contribution still remaining. Learning from time series for health is a uniquely challenging and important area with increasing application. Significant advancements are required to realize the societal benefits of these systems for healthcare. This workshop will bring together machine learning researchers dedicated to advancing the field of time series modeling in healthcare to bring these models closer to deployment.

Yuxi Li · Emma Brunskill · Minmin Chen · Omer Gottesman · Lihong Li · Yao Liu · Zhiwei Tony Qin · Matthew Taylor

[ Theater A ]

Discover how to improve the adoption of RL in practice by discussing key research problems, the state of the art, and success stories, insights, and lessons learned regarding practical RL algorithms, practical issues, and applications, with leading experts from both academia and industry at the NeurIPS 2022 RL4RealLife workshop.

Reihaneh Rabbany · Jian Tang · Michael Bronstein · Shenyang Huang · Meng Qu · Kellin Pelrine · Jianan Zhao · Farimah Poursafaei · Aarash Feizi

[ Room 399 ]

This workshop bridges the conversation among different areas such as temporal knowledge graph learning, graph anomaly detection, and graph representation learning. It aims to share understanding and techniques to facilitate the development of novel temporal graph learning methods. It also brings together researchers from both academia and industry and connects researchers from various fields aiming to span theories, methodologies, and applications.

Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff

[ Room 392 ]

As machine learning models permeate every aspect of decision making systems in consequential areas such as healthcare and criminal justice, it has become critical for these models to satisfy trustworthiness desiderata such as fairness, interpretability, accountability, privacy and security. Initially studied in isolation, recent work has emerged at the intersection of these different fields of research, leading to interesting questions on how fairness can be achieved using a causal perspective and under privacy concerns.

Indeed, the field of causal fairness has seen a large expansion in recent years, notably as a way to counteract the limitations of initial statistical definitions of fairness. While a causal framing provides flexibility in modelling and mitigating sources of bias using a causal model, proposed approaches rely heavily on assumptions about the data generating process, i.e., the faithfulness and ignorability assumptions. This leads to open discussions on (1) how to fully characterize causal definitions of fairness, (2) how, if possible, to improve the applicability of such definitions, and (3) what constitutes a suitable causal framing of bias from a sociotechnical perspective.

Additionally, while most existing work on causal fairness assumes observed sensitive attribute data, such information is likely to be unavailable due to, for example, …

Ismini Lourentzou · Joy T Wu · Satyananda Kashyap · Alexandros Karargyris · Leo Anthony Celi · Ban Kawas · Sachin S Talathi

[ Room 386 ]

Eye gaze has proven to be a cost-efficient way to collect large-scale physiological data that can reveal the underlying human attentional patterns in real-life workflows, and thus has long been explored as a signal to directly measure human-related cognition in various domains. Physiological data (including but not limited to eye gaze) offer new perception capabilities, which could be used in several ML domains, e.g., egocentric perception, embodied AI, NLP, etc. They can help infer human perception, intentions, beliefs, goals, and other cognitive properties that are much needed for human-AI interaction and agent coordination. In addition, large collections of eye-tracking data have enabled data-driven modeling of human visual attention mechanisms, for both saliency and scanpath prediction, with twofold advantages: from the neuroscientific perspective, to understand biological mechanisms better, and from the AI perspective, to equip agents with the ability to mimic or predict human behavior and improve interpretability and interactions.

With the emergence of immersive technologies, there is now more than ever a need for experts from various backgrounds (e.g., the machine learning, vision, and neuroscience communities) to share expertise and contribute to a deeper understanding of the intricacies of cost-efficient human supervision signals (e.g., eye gaze) and their utilization towards …

Atilim Gunes Baydin · Adji Bousso Dieng · Emine Kucukbenli · Gilles Louppe · Siddharth Mishra-Sharma · Benjamin Nachman · Brian Nord · Savannah Thais · Anima Anandkumar · Kyle Cranmer · Lenka Zdeborová · Rianne van den Berg

[ Room 275 - 277 ]

The Machine Learning and the Physical Sciences workshop aims to provide an informal, inclusive and leading-edge venue for research and discussions at the interface of machine learning (ML) and the physical sciences. This interface spans (1) applications of ML in physical sciences (ML for physics), (2) developments in ML motivated by physical insights (physics for ML), and most recently (3) the convergence of ML and physical sciences (physics with ML), which inspires questions about what scientific understanding means in the age of complex AI-powered science, and what roles machine and human scientists will play in developing scientific understanding in the future.

Sophia Sanborn · Christian A Shewmake · Simone Azeglio · Arianna Di Bernardo · Nina Miolane

[ Room 283 - 285 ]

In recent years, there has been a growing appreciation for the importance of modeling the geometric structure in data — a perspective that has developed in both the geometric deep learning and applied geometry communities. In parallel, an emerging set of findings in neuroscience suggests that group-equivariance and the preservation of geometry and topology may be fundamental principles of neural coding in biology.

This workshop will bring together researchers from geometric deep learning and geometric statistics with theoretical and empirical neuroscientists whose work reveals the elegant implementation of geometric structure in biological neural circuitry. Group theory and geometry were instrumental in unifying models of fundamental forces and elementary particles in 20th-century physics. Likewise, they have the potential to unify our understanding of how neural systems form useful representations of the world.

The goal of this workshop is to unify the emerging paradigm shifts towards structured representations in deep networks and the geometric modeling of neural data — while promoting a solid mathematical foundation in algebra, geometry, and topology.

Arno Blaas · Sahra Ghalebikesabi · Javier Antorán · Fan Feng · Melanie F. Pradier · Ian Mason · David Rohde

[ Ballroom C ]

Deep learning has flourished in the last decade. Recent breakthroughs have shown stunning results, and yet, researchers still cannot fully explain why neural networks generalise so well or why some architectures or optimizers work better than others. There is a lack of understanding of existing deep learning systems, which led NeurIPS 2017 test of time award winners Rahimi & Recht to compare machine learning with alchemy and to call for the return of the 'rigour police'.

Despite excellent theoretical work in the field, deep neural networks are so complex that they may never be fully comprehended through theory alone. Unfortunately, the experimental alternative - rigorous work that neither proves a theorem nor proposes a new method - is currently under-valued in the machine learning community.

To change this, this workshop aims to promote the method of empirical falsification.

We solicit contributions which explicitly formulate a hypothesis related to deep learning or its applications (based on first principles or prior work), and then empirically falsify it through experiments. We further encourage submissions to go a layer deeper and investigate the causes of an initial idea not working as expected. This workshop will showcase how negative results offer important …

Ishan Misra · Pengtao Xie · Gul Varol · Yale Song · Yuki Asano · Xiaolong Wang · Pauline Luc

[ Room 391 ]

Ritwik Gupta · Robin Murphy · Eric Heim · Guido Zarrella · Caleb Robinson

[ Room 398 ]

Humanitarian crises from disease outbreak to war to oppression against disadvantaged groups have threatened people and their communities throughout history. Natural disasters are a single, extreme example of such crises. In the wake of hurricanes, earthquakes, and other such crises, people have ceaselessly sought ways, often harnessing innovation, to provide assistance to victims after disasters have struck.

Through this workshop, we intend to establish meaningful dialogue between the Artificial Intelligence (AI) and Humanitarian Assistance and Disaster Response (HADR) communities. By the end of the workshop, the NeurIPS research community can learn the practical challenges of aiding those in crisis, while the HADR community can get to know the state of art and practice in AI. We seek to establish a pipeline of transitioning the research created by the NeurIPS community to real-world humanitarian issues. We believe such an endeavor is possible due to recent successes in applying techniques from various AI and Machine Learning (ML) disciplines to HADR.

Jiachen Li · Nigamaa Nayakanti · Xinshuo Weng · Daniel Omeiza · Ali Baheri · German Ros · Rowan McAllister

[ Theater B ]

Welcome to the NeurIPS 2022 Workshop on Machine Learning for Autonomous Driving!

Autonomous vehicles (AVs) offer a rich source of high-impact research problems for the machine learning (ML) community, including perception, state estimation, probabilistic modeling, time series forecasting, gesture recognition, robustness guarantees, real-time constraints, user-machine communication, multi-agent planning, and intelligent infrastructure. Further, the interaction between ML subfields towards a common goal of autonomous driving can catalyze interesting inter-field discussions that spark new avenues of research, which this workshop aims to promote. As an application of ML, autonomous driving has the potential to greatly improve society by reducing road accidents, giving independence to those unable to drive, and even inspiring younger generations with tangible examples of ML-based technology clearly visible on local streets. All are welcome to attend! This will be the 7th NeurIPS workshop in this series. Previous workshops in 2016, 2017, 2018, 2019, 2020, and 2021 enjoyed wide participation from both academia and industry.

Noga Zaslavsky · Mycal Tucker · Sarah Marzen · Irina Higgins · Stephanie Palmer · Samuel J Gershman

[ Room 281 - 282 ]

Many cognitive and neural systems can be described in terms of compression and transmission of information given bounded resources. While information theory, as a principled mathematical framework for characterizing such systems, has been widely applied in neuroscience and machine learning, its role in understanding cognition has traditionally been contested. This traditional view has been changing in recent years, with growing evidence that information-theoretic optimality principles underlie a wide range of cognitive functions, including perception, working memory, language, and decision making. In parallel, there has also been a surge of contemporary information-theoretic approaches in machine learning, enabling large-scale neural-network implementation of information-theoretic models.

These scientific and technological developments open up new avenues for progress toward an integrative computational theory of human and artificial cognition, by leveraging information-theoretic principles as bridges between various cognitive functions and neural representations. This workshop aims to explore these new research directions and bring together researchers from machine learning, cognitive science, neuroscience, linguistics, economics, and potentially other fields, who are interested in integrating information-theoretic approaches that have thus far been studied largely independently of each other. In particular, we aim to discuss questions and exchange ideas along the following directions:

- Understanding human cognition: To what extent …

Roshan Rao · Jonas Adler · Namrata Anand · John Ingraham · Sergey Ovchinnikov · Ellen Zhong

[ Room 288 - 289 ]

In only a few years, structural biology, the study of the 3D structure or shape of proteins and other biomolecules, has been transformed by breakthroughs from machine learning algorithms. Machine learning models are now routinely being used by experimentalists to predict structures that can help answer real biological questions (e.g. AlphaFold), accelerate the experimental process of structure determination (e.g. computer vision algorithms for cryo-electron microscopy), and have become a new industry standard for bioengineering new protein therapeutics (e.g. large language models for protein design). Despite all this progress, there are still many active and open challenges for the field, such as modeling protein dynamics, predicting higher order complexes, pushing towards generalization of protein folding physics, and relating the structure of proteins to the in vivo and contextual nature of their underlying function. These challenges are diverse and interdisciplinary, motivating new kinds of machine learning systems and requiring the development and maturation of standard benchmarks and datasets.

In this exciting time for the field, our workshop, “Machine Learning in Structural Biology” (MLSB), seeks to bring together relevant experts, practitioners, and students across a broad community to focus on these challenges and opportunities. We believe the union of these communities, including the …

Neel Kant · Martin Maas · Azade Nova · Benoit Steiner · Xinlei XU · Dan Zhang

[ Room 396 ]

Machine Learning (ML) for Systems is an important direction for applying ML in the real world. It has been shown that ML can replace long-standing heuristics in computer systems by leveraging supervised learning and reinforcement learning (RL) approaches. The computer systems community recognizes the importance of ML in tackling strenuous multi-objective tasks such as designing new data structures [1], integrated circuits [2,3], or schedulers, as well as implementing control algorithms for applications such as compilers [12,13], databases [8], memory management [9,10], or ML frameworks [6].

General Workshop Direction. This is the fifth iteration of this workshop. In previous editions, we showcased approaches and frameworks to solve problems, bringing together researchers and practitioners at NeurIPS from both the ML and systems communities. While breaking new ground, we encouraged collaborations and development across a broad range of ML for Systems work, much of it later published in top-tier conferences [6,13,14,15,16,17,18]. This year, we plan to continue on this path while expanding our call for papers to encourage emerging work on minimizing energy footprint, reaching carbon neutrality, and using machine learning for system security and privacy.

Focusing the Workshop on Unifying Works. As the field of ML for Systems is maturing, we are adapting the …

Jian Lou · Zhiguang Wang · Chejian Xu · Bo Li · Dawn Song

[ Room 298 - 299 ]

Recent rapid development of machine learning has largely benefited from algorithmic advances, collection of large-scale datasets, and availability of high-performance computation resources, among others. However, the large volume of collected data and massive information may also bring serious security, privacy, services provisioning, and network management challenges. In order to achieve decentralized, secure, private, and trustworthy machine learning operation and data management in this “data-centric AI” era, the joint consideration of blockchain techniques and machine learning may bring significant benefits and have attracted great interest from both academia and industry. On the one hand, decentralization and blockchain techniques can significantly facilitate training data and machine learning model sharing, decentralized intelligence, security, privacy, and trusted decision-making. On the other hand, Web3 platforms and applications, which are built on blockchain technologies and token-based economics, will greatly benefit from machine learning techniques in resource efficiency, scalability, trustworthy machine learning, and other ML-augmented tools for creators and participants in the end-to-end ecosystems.

This workshop focuses on how future researchers and practitioners should prepare themselves to achieve different trustworthiness requirements, such as security and privacy in machine learning through decentralization and blockchain techniques, as well as how to leverage machine learning techniques to automate some processes …

Sara Hooker · Rosanne Liu · Pablo Samuel Castro · FatemehSadat Mireshghallah · Sunipa Dev · Benjamin Rosman · João Madeira Araújo · Savannah Thais · Sunny Sanyal · Tejumade Afonja · Swapneel Mehta · Tyler Zhu

[ Room 394-395 ]

This workshop aims to discuss the challenges and opportunities of expanding research collaborations in light of the changing landscape of where, how, and by whom research is produced. Progress toward democratizing AI research has been centered around making knowledge (e.g. class materials), established ideas (e.g. papers), and technologies (e.g. code, compute) more accessible. However, open, online resources are only part of the equation. Growth as a researcher requires not only learning by consuming information individually, but hands-on practice whiteboarding, coding, plotting, debugging, and writing collaboratively, with either mentors or peers. Of course, making "collaborators" more universally accessible is fundamentally more difficult than, say, ensuring all can access arXiv papers because scaling people and research groups is much harder than scaling websites. Can we nevertheless make access to collaboration itself more open?

Alon Albalak · Colin Raffel · Chunting Zhou · Deepak Ramachandran · Xuezhe Ma · Sebastian Ruder

[ Theater C ]

Transfer learning from large pre-trained language models (PLMs) has become the de facto method for a wide range of natural language processing tasks. Current transfer learning methods, combined with PLMs, have seen outstanding success in transferring knowledge to new tasks, domains, and even languages. However, existing methods, including fine-tuning, in-context learning, parameter-efficient tuning, semi-parametric models with knowledge augmentation, etc., still lack consistently good performance across different tasks, domains, varying sizes of data resources, and diverse textual inputs.

This workshop aims to invite researchers from different backgrounds to share their latest work in efficient and robust transfer learning methods, discuss challenges and risks of transfer learning models when deployed in the wild, understand positive and negative transfer, and also debate over future directions.

Mengjiao (Sherry) Yang · Yilun Du · Jack Parker-Holder · Siddharth Karamcheti · Igor Mordatch · Shixiang (Shane) Gu · Ofir Nachum

[ Room 291 - 292 ]

Humans acquire vision, language, and decision making abilities through years of experience, arguably corresponding to millions of video frames, audio clips, and interactions with the world. Following this data-driven approach, recent foundation models trained on large and diverse datasets have demonstrated emergent capabilities and fast adaptation to a wide range of downstream vision and language tasks (e.g., BERT, DALL-E, GPT-3, CLIP). Meanwhile, in the decision making and reinforcement learning (RL) literature, foundation models have yet to fundamentally shift the traditional paradigm in which an agent learns from its own or others’ collected experience, typically on a single task and with limited prior knowledge. Nevertheless, there has been a growing body of foundation-model-inspired research in decision making that often involves collecting large amounts of interactive data for self-supervised learning at scale. For instance, foundation models such as BERT and GPT-3 have been applied to modeling trajectory sequences of agent experience, and ever-larger datasets have been curated for learning multimodal, multitask, and generalist agents. These works demonstrate the potential benefits of foundation models for a broad set of decision making applications such as autonomous driving, healthcare systems, robotics, goal-oriented dialogue, and recommendation systems.

Despite early signs of success, foundation models for decision …

Pan Lu · Swaroop Mishra · Sean Welleck · Yuhuai Wu · Hannaneh Hajishirzi · Percy Liang

[ Room 293 - 294 ]

Mathematical reasoning is a unique aspect of human intelligence and a fundamental building block for scientific and intellectual pursuits. However, learning mathematics is often a challenging human endeavor that relies on expert instructors to create, teach and evaluate mathematical material. From an educational perspective, AI systems that aid in this process offer increased inclusion and accessibility, efficiency, and understanding of mathematics. Moreover, building systems capable of understanding, creating, and using mathematics offers a unique setting for studying reasoning in AI. This workshop will investigate the intersection of mathematics education and AI.

Courtney Paquette · Sebastian Stich · Quanquan Gu · Cristóbal Guzmán · John Duchi

[ Room 295 - 296 ]

OPT 2022 will bring together experts in optimization to share their perspectives while leveraging crossover experts in ML to share their views and recent advances. OPT 2022 honors this tradition of bringing together people from optimization and from ML in order to promote and generate new interactions between the two communities.

To foster the spirit of innovation and collaboration, a goal of this workshop, OPT 2022 will focus the contributed talks on research in Reliable Optimization Methods for ML. Many optimization algorithms for ML were originally developed with the goal of handling computational constraints (e.g., stochastic gradient-based algorithms). Moreover, the analyses of these algorithms followed the classical optimization approach, measuring the performance of an algorithm by (i) its computational cost and (ii) its convergence for any input. As engineering capabilities increase and ML is widely adopted in many real-world applications, practitioners of ML are seeking optimization algorithms that go beyond finding the minimizer with the fastest algorithm. They want reliable methods that handle the complications that arise in the real world. For example, bad actors are increasingly attempting to fool models with deceptive data. This leads to questions such as what algorithms are more robust to adversarial …

Chelsea Finn · Fanny Yang · Hongseok Namkoong · Masashi Sugiyama · Jacob Eisenstein · Jonas Peters · Rebecca Roelofs · Shiori Sagawa · Pang Wei Koh · Yoonho Lee

[ Room 388 - 390 ]

This workshop brings together domain experts and ML researchers working on mitigating distribution shifts in real-world applications.

Distribution shifts—where a model is deployed on a data distribution different from what it was trained on—pose significant robustness challenges in real-world ML applications. Such shifts are often unavoidable in the wild and have been shown to substantially degrade model performance in applications such as biomedicine, wildlife conservation, sustainable development, robotics, education, and criminal justice. For example, models can systematically fail when tested on patients from different hospitals or people from different demographics.

This workshop aims to convene a diverse set of domain experts and methods-oriented researchers working on distribution shifts. We are broadly interested in methods, evaluations and benchmarks, and theory for distribution shifts, and we are especially interested in work on distribution shifts that arise naturally in real-world application contexts. Examples of relevant topics include, but are not limited to:
- Examples of real-world distribution shifts in various application areas. We especially welcome applications that are not widely discussed in the ML research community, e.g., education, sustainable development, and conservation. We encourage submissions that characterize distribution shifts and their effects in real-world applications; it is not at all necessary to propose …

Kianté Brantley · Soham Dan · Ji Ung Lee · Khanh Nguyen · Edwin Simpson · Alane Suhr · Yoav Artzi

[ Room 397 ]

Interactive machine learning (IML) studies algorithms that learn from data collected through interaction with either a computational or human agent in a shared environment, through feedback on model decisions. In contrast to the common paradigm of supervised learning, IML does not assume access to pre-collected labeled data, thereby decreasing data costs. Instead, it allows systems to improve over time, empowering non-expert users to provide feedback. IML has seen wide success in areas such as video games and recommendation systems.
Although most downstream applications of NLP involve interactions with humans - e.g., via labels, demonstrations, corrections, or evaluation - common NLP models are not built to learn from or adapt to users through interaction. There remains a large research gap that must be closed to enable NLP systems that adapt on-the-fly to the changing needs of humans and dynamic environments through interaction.

Sören Becker · Alexis Bellot · Cecilia Casolo · Niki Kilbertus · Sara Magliacane · Yuyang (Bernie) Wang

[ Room 387 ]

Divyansh Kaushik · Jennifer Hsia · Jessica Huynh · Yonadav Shavit · Samuel Bowman · Ting-Hao Huang · Douwe Kiela · Zachary Lipton · Eric Michael Smith

[ Room 290 ]

Fahad Shahbaz Khan · Gul Varol · Salman Khan · Ping Luo · Rao Anwer · Ashish Vaswani · Hisham Cholakkal · Niki Parmar · Joost van de Weijer · Mubarak Shah

[ Virtual ]

Transformer models have demonstrated excellent performance on a diverse set of computer vision applications ranging from classification to segmentation on various data modalities such as images, videos, and 3D data. The goal of this workshop is to bring together computer vision and machine learning researchers working towards advancing the theory, architecture, and algorithmic design for vision transformer models, as well as the practitioners utilizing transformer models for novel applications and use cases.

The workshop’s motivation is to narrow the gap between research advancements in transformer designs and applications utilizing transformers for various computer vision tasks. The workshop also aims to widen the adoption of transformer models for various vision-related industrial applications. We are interested in papers reporting experimental results on the utilization of transformers for any application of computer vision, the challenges faced, and mitigation strategies, on topics including, but not limited to, image classification, object detection, segmentation, human-object interaction detection, and scene understanding based on 3D, video, and multimodal inputs.

Alessandra Tosi · Andrei Paleyes · Christian Cabrera · Fariba Yousefi · S Roberts

[ Virtual ]

The goal of this event is to bring together people from different communities with the common interest in the Deployment of Machine Learning Systems.

With the dramatic rise of companies dedicated to providing Machine Learning software-as-a-service tools, Machine Learning has become a tool for solving real-world problems that is increasingly accessible in many industrial and social sectors. As the number of deployments grows, so does the number of known challenges and hurdles that practitioners face along the deployment process to ensure the continual delivery of good performance from deployed Machine Learning systems. Such challenges can lie in the adoption of ML algorithms for concrete use cases, discovery and quality of data, maintenance of production ML systems, as well as ethics.

Matej Zečević · Devendra Dhami · Christina Winkler · Thomas Kipf · Robert Peharz · Petar Veličković

[ Virtual ]

Understanding causal interactions is central to human cognition and thereby a central quest in science, engineering, business, and law. Developmental psychology has shown that children explore the world in a similar way to how scientists do, asking questions such as “What if?” and “Why?” AI research aims to replicate these capabilities in machines. Deep learning in particular has brought about powerful tools for function approximation by means of end-to-end trainable deep neural networks. This capability has been corroborated by tremendous success in countless applications. However, their lack of interpretability and reasoning capabilities proves to be a hindrance to building systems of human-like ability. Therefore, enabling causal reasoning capabilities in deep learning is of critical importance for research on the path towards human-level intelligence. First steps towards neural-causal models exist and promise a vision of AI systems that perform causal inferences as efficiently as modern-day neural models. Similarly, classical symbolic methods are being revisited and reintegrated into current systems to allow for reasoning capabilities beyond pure pattern recognition. The Pearlian formalization of causality has revealed a theoretically sound and practically strict hierarchy of reasoning that serves as a helpful benchmark for evaluating the reasoning capabilities of neuro-symbolic systems.

Our aim is …

Michael Poli · Winnie Xu · Estefany Kelly Buchanan · Maryam Hosseini · Luca Celotti · Martin Magill · Ermal Rrapaj · Qiyao Wei · Stefano Massaroli · Patrick Kidger · Archis Joglekar · Animesh Garg · David Duvenaud

[ Virtual ]

In recent years, there has been a rapid increase of machine learning applications in the computational sciences, with some of the most impressive results at the interface of deep learning (DL) and differential equations (DEs). DL techniques have been used in a variety of ways to dramatically enhance the effectiveness of DE solvers and computer simulations. These successes have widespread implications, as DEs are among the most well-understood tools for the mathematical analysis of scientific knowledge, and they are fundamental building blocks for mathematical models in engineering, finance, and the natural sciences. Conversely, DL algorithms based on DEs, such as neural differential equations and continuous-time diffusion models, have also been successfully employed as deep learning models. Moreover, theoretical tools from DE analysis have been used to glean insights into the expressivity and training dynamics of mainstream deep learning algorithms.

This workshop will aim to bring together researchers with backgrounds in computational science and deep learning to encourage intellectual exchanges, cultivate relationships and accelerate research in this area. The scope of the workshop spans topics at the intersection of DL and DEs, including theory of DL and DEs, neural differential equations, solving DEs with neural networks, and more.

Elizabeth Wood · Adji Bousso Dieng · Aleksandrina Goeva · Alex X Lu · Anshul Kundaje · Chang Liu · Debora Marks · Ed Boyden · Eli N Weinstein · Lorin Crawford · Mor Nitzan · Rebecca Boiarsky · Romain Lopez · Tamara Broderick · Ray Jones · Wouter Boomsma · Yixin Wang · Stephen Ra

[ Virtual ]

All events will be in a non-NeurIPS Zoom and on Gather.Town, without embedded streaming. Links below.

Michael Muller · Plamen P Angelov · Hal Daumé III · Shion Guha · Q.Vera Liao · Nuria Oliver · David Piorkowski

[ Virtual ]

Andrey Kormilitzin · Dan Joyce · Nenad Tomasev · Kevin McKee

[ Virtual ]

Mental illness is the complex product of biological, psychological and social factors that foreground issues of under-representation, institutional and societal inequalities, bias and intersectionality in determining the outcomes for people affected by these disorders – the very same priorities that AI/ML fairness has begun to attend to in the past few years.

Despite the history of impoverished material investment in mental health globally, in the past decade, research practices in mental health have begun to embrace patient and citizen activism and the field has emphasised stakeholder (patients and public) participation as a central and absolutely necessary component of basic, translational and implementation science. This positions mental healthcare as something of an exemplar of participatory practices in healthcare from which technologists, engineers and scientists can learn.

The aim of the workshop is to address sociotechnical issues in healthcare AI/ML that are idiosyncratic to mental health.

Uniquely, this workshop will invite and bring together practitioners and researchers rarely found together “in the same room”, including:
- Under-represented groups with special interest in mental health and illness
- Clinical psychiatry, psychology and allied mental health professions
- Technologists, scientists and engineers from the machine learning communities

We will create an open, dialogue-focused exchange …

Huan Zhang · Linyi Li · Chaowei Xiao · J. Zico Kolter · Anima Anandkumar · Bo Li

[ Virtual ]

To address the negative societal impacts of ML, researchers have looked into different principles and constraints to ensure trustworthy and socially responsible machine learning systems. This workshop makes a first attempt towards bridging the gap between the security, privacy, fairness, ethics, game theory, and machine learning communities, and aims to discuss the principles and experiences of developing trustworthy and socially responsible machine learning systems. The workshop also focuses on how future researchers and practitioners should prepare themselves to reduce the risks of unintended behaviors of sophisticated ML models.

This workshop aims to bring together researchers interested in the emerging and interdisciplinary field of trustworthy and socially responsible machine learning from a broad range of disciplines with different perspectives on this problem. We attempt to highlight recent related work from different communities, clarify the foundations of trustworthy machine learning, and chart out important directions for future work and cross-community collaborations.

Manuela Veloso · John Dickerson · Senthil Kumar · Eren K. · Jian Tang · Jie Chen · Peter Henstock · Susan Tibbs · Ani Calinescu · Naftali Cohen · C. Bayan Bruss · Armineh Nourbakhsh

[ Virtual ]

Graph structures provide unique opportunities for representing complex systems that are challenging to model otherwise, due to complexities such as a large number of entities, multiple entity types, different relationship types, and diverse patterns.

This creates unique opportunities for applying graphs and graph-based solutions within a wide array of industrial applications. In financial services, graph representations are used to model markets' transactional systems and detect financial crime. In the healthcare field, knowledge graphs have gained traction as the best way of representing interdisciplinary scientific knowledge across biology, chemistry, pharmacology, toxicology, and medicine. By mining scientific literature and combining it with various data sources, knowledge graphs provide an up-to-date framework for both human and computer intelligence to generate new scientific hypotheses, drug strategies, and ideas.

In addition to the benefits of graph representation, graph-native machine learning solutions such as graph neural networks, graph convolutional networks, and others have been implemented effectively in many industrial systems. In finance, graph dynamics have been studied to capture emerging phenomena in volatile markets. In healthcare, these techniques have extended traditional network analysis approaches to enable link prediction. A recent example was BenevolentAI's knowledge graph prediction that baricitinib (now in clinical trials) …

Alex Bewley · Roberto Calandra · Anca Dragan · Igor Gilitschenski · Emily Hannigan · Masha Itkina · Hamidreza Kasaei · Jens Kober · Danica Kragic · Nathan Lambert · Julien PEREZ · Fabio Ramos · Ransalu Senanayake · Jonathan Tompson · Vincent Vanhoucke · Markus Wulfmeier

[ Virtual ]

Machine learning (ML) has been one of the premier drivers of recent advances in robotics research and has begun to impact several real-world robotic applications in unstructured and human-centric environments, such as transportation, healthcare, and manufacturing. At the same time, robotics has been a key motivation for numerous research problems in artificial intelligence, from efficient algorithms to robust generalization of decision models. However, there are still considerable obstacles to fully leveraging state-of-the-art ML in real-world robotics applications. For capable robots equipped with ML models, guarantees on robustness and additional analysis of the social implications of these models are required for their use in real-world robotic domains that interface with humans (e.g., autonomous vehicles and tele-operated or assistive robots).

To support the development of robots that are safely deployable among humans, the field must consider trustworthiness as a central aspect in the development of real-world robot learning systems. Unlike many other applications of ML, the combined complexity of physical robotic platforms and learning-based perception-action loops presents unique technical challenges. These challenges include concrete technical problems such as very high performance requirements, explainability, predictability, verification, uncertainty quantification, and robust operation in dynamically distributed, open-set domains. Since robots are …

Peetak Mitra · Maria João Sousa · Mark Roth · Jan Drgona · Emma Strubell · Yoshua Bengio

[ Virtual ]

The focus of this workshop is the use of machine learning to help address climate change, encompassing mitigation efforts (reducing greenhouse gas emissions), adaptation measures (preparing for unavoidable consequences), and climate science (our understanding of the climate and future climate predictions). Specifically, we aim to: (1) showcase high-impact applications of ML to climate change mitigation, adaptation, and climate science, (2) discuss related research directions to which the ML community can contribute, (3) brainstorm mechanisms to scale early academic research to successful, viable deployments, and (4) encourage fruitful collaboration between the ML community and a diverse set of researchers and practitioners from climate change-related fields. Building on our past workshops on this topic, this workshop particularly aims to explore the theme of climate change-informed metrics for AI, focusing both on (a) the domain-specific metrics by which AI systems should be evaluated when used as a tool for climate action, and (b) the climate change-related implications of using AI more broadly.

Dan Hendrycks · Victoria Krakovna · Dawn Song · Jacob Steinhardt · Nicholas Carlini

[ Virtual ]

Designing systems to operate safely in real-world settings is a topic of growing interest in machine learning. As ML becomes more capable and widespread, long-term and long-tail safety risks will grow in importance. To make the adoption of ML more beneficial, various aspects of safety engineering and oversight need to be proactively addressed by the research community. This workshop will bring together researchers from machine learning communities to focus on research topics in Robustness, Monitoring, Alignment, and Systemic Safety.
* Robustness is designing systems to be reliable in the face of adversaries and highly unusual situations.
* Monitoring is detecting anomalies, malicious use, and discovering unintended model functionality.
* Alignment is building models that represent and safely optimize difficult-to-specify human values.
* Systemic Safety is using ML to address broader risks related to how ML systems are handled, such as cyberattacks, facilitating cooperation, or improving the decision-making of public servants.

Tom White · Yingtao Tian · Lia Coleman · Samaneh Azadi

[ Virtual ]

Karol Hausman · Qi Zhang · Matthew Taylor · Martha White · Suraj Nair · Manan Tomar · Risto Vuorio · Ted Xiao · Zeyu Zheng · Manan Tomar

[ Virtual ]

In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multi-agent interactions. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside of the field gain a high-level view about the current state of the art and potential directions for future contributions.

Alex Hanna · Rida Qadri · Fernando Diaz · Nick Seaver · Morgan Scheuerman

[ Virtual ]

Panels 1b and 2b will be hosted in a separate Zoom room.

Contributed Panel 1b: Frameworks of AI/Culture entanglement
Panel Zoom Link:
Password: fishvale

Contributed Panel 2b: Theorizing AI/Culture entanglement
Panel Zoom Link:
Password: fishvale