

Workshop

Second Workshop on Efficient Natural Language and Speech Processing (ENLSP-II)

Mehdi Rezagholizadeh · Peyman Passban · Yue Dong · Lili Mou · Pascal Poupart · Ali Ghodsi · Qun Liu
Dec 2, 5:30 AM - 4:00 PM La Nouvelle Orleans Ballroom C (level 2)

The second edition of the Efficient Natural Language and Speech Processing (ENLSP-II) workshop focuses on fundamental and challenging problems in making natural language and speech processing (especially pre-trained models) more efficient in terms of Data, Model, Training, and Inference. The program brings together experts from academia and industry through invited talks, panel discussions, paper submissions, reviews, interactive posters, oral presentations, and a mentorship program. This is a unique opportunity to address the efficiency issues of current models, build connections, exchange ideas, brainstorm solutions, and foster future collaborations. The topics of this workshop will be of interest to people working on general machine learning, deep learning, optimization, theory, and NLP & speech applications.

Workshop

Progress and Challenges in Building Trustworthy Embodied AI

Chen Tang · Karen Leung · Leilani Gilpin · Jiachen Li · Changliu Liu
Dec 2, 5:50 AM - 3:00 PM Room 357

The recent advances in deep learning and artificial intelligence have equipped autonomous agents with increasing intelligence, enabling human-level performance in challenging tasks. In particular, these agents have shown great potential in interacting and collaborating with humans (e.g., self-driving cars, industrial robot co-workers, smart homes, and domestic robots). However, the opaque nature of deep learning models makes it difficult to decipher the agents' decision-making process, preventing stakeholders from readily trusting autonomous agents, especially for safety-critical tasks requiring physical human interaction. In this workshop, we bring together experts with diverse and interdisciplinary backgrounds to build a roadmap for developing and deploying trustworthy interactive autonomous systems at scale. Specifically, we aim to address the following questions:

1) What properties are required for building trust between humans and interactive autonomous systems? How can we assess and ensure these properties without compromising the expressiveness of the models and the performance of the overall systems?

2) How can we develop and deploy trustworthy autonomous agents under an efficient and trustful workflow? How should we transition from development to deployment?

3) How can we define standard metrics to quantify trustworthiness, from regulatory, theoretical, and experimental perspectives? How do we know that these trustworthiness metrics can scale to the broader population?

4) What are the most pressing aspects and open questions for the development of trustworthy autonomous agents interacting with humans? Which research areas are prime for research in academia, and which are better suited for industry research?

Workshop

Synthetic Data for Empowering ML Research

Mihaela van der Schaar · Zhaozhi Qian · Sergul Aydore · Dimitris Vlitas · Dino Oglic · Tucker Balch
Dec 2, 6:00 AM - 3:00 PM Room 288 - 289

Advances in machine learning owe much to the public availability of high-quality benchmark datasets and the well-defined problem settings that they encapsulate. Examples are abundant: CIFAR-10 for image classification, COCO for object detection, SQuAD for question answering, BookCorpus for language modelling, etc. There is a general belief that the accessibility of high-quality benchmark datasets is central to the thriving of our community.

However, three prominent issues affect benchmark datasets: data scarcity, privacy, and bias. They already manifest in many existing benchmarks, and also make the curation and publication of new benchmarks difficult (if not impossible) in numerous high-stakes domains, including healthcare, finance, and education. Hence, although ML holds strong promise in these domains, the lack of high-quality benchmark datasets creates a significant hurdle for the development of methodology and algorithms and leads to missed opportunities.

Synthetic data is a promising solution to the key issues of benchmark dataset curation and publication. Specifically, high-quality synthetic data generation could be done while addressing the following major issues.

1. Data Scarcity. The training and evaluation of ML algorithms require datasets with a sufficient sample size. Note that even if an algorithm can learn from very few samples, we still need sufficient validation data for model evaluation. However, it is often challenging to obtain the desired number of samples due to inherent data scarcity (e.g. people with unique characteristics, patients with rare diseases, etc.) or the cost and feasibility constraints of data collection. There has been very active research in cross-domain and out-of-domain data generation, as well as generation from a few samples. Once the generator is trained, one can obtain arbitrarily large synthetic datasets.

2. Privacy. In many key applications, ML algorithms rely on record-level data collected from human subjects, which leads to privacy concerns and legal risks. As a result, data owners are often hesitant to publish datasets for the research community. Even if they are willing to, accessing the datasets often requires significant time and effort from the researchers. Synthetic data is regarded as one potential way to promote privacy. The 2019 NeurIPS Competition "Synthetic data hide and seek challenge" demonstrates the difficulty in performing privacy attacks on synthetic data. Many recent works look further into the theoretical and practical aspects of synthetic data and privacy.

3. Bias and under-representation. A benchmark dataset may be subject to data collection bias and under-represent certain groups (e.g. people with less-privileged access to technology). Using such datasets as benchmarks would (implicitly) encourage the community to build algorithms that reflect or even exploit the existing bias. This is likely to hamper the adoption of ML in high-stakes applications that require fairness, such as finance and justice. Synthetic data provides a way to curate less biased benchmark data. Specifically, (conditional) generative models can be used to augment any under-represented group in the original dataset. Recent works have shown that training on synthetically augmented data leads to consistent improvements in robustness and generalisation.
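To make the augmentation idea concrete, here is a minimal sketch (not from the workshop itself): a per-group Gaussian stands in for a proper conditional generative model, and under-represented groups are topped up with synthetic samples drawn from it. The function name and interface are illustrative.

```python
import numpy as np

def augment_minority(X, y, target_count, seed=None):
    """Oversample under-represented groups by sampling from a per-group
    Gaussian fitted to the original data (a simple stand-in for a richer
    conditional generative model)."""
    rng = np.random.default_rng(seed)
    X_aug, y_aug = [X], [y]
    for group in np.unique(y):
        Xg = X[y == group]
        deficit = target_count - len(Xg)
        if deficit <= 0:
            continue  # group already has enough samples
        mean = Xg.mean(axis=0)
        # Small diagonal jitter keeps the covariance positive definite.
        cov = np.cov(Xg, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        synth = rng.multivariate_normal(mean, cov, size=deficit)
        X_aug.append(synth)
        y_aug.append(np.full(deficit, group))
    return np.concatenate(X_aug), np.concatenate(y_aug)
```

A real pipeline would replace the Gaussian with a learned conditional generator, but the balancing logic is the same.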

Why do we need this workshop? Despite the growing interest in using synthetic data to empower ML, this agenda is still challenging because it involves multiple research fields and various industry stakeholders. Specifically, it calls for collaboration among researchers in generative models, privacy, and fairness. Existing research in generative models focuses on generating high-fidelity data, often neglecting the privacy and fairness aspects. On the other hand, existing research in privacy and fairness often focuses on the discriminative setting rather than the generative setting. Finally, while generative modelling of images and tabular data has matured, the generation of time series and multi-modal data is still a vibrant area of research, especially in complex domains such as healthcare and finance. The data modality and characteristics differ significantly across application domains and industries. It is therefore important to get input from industry experts so that the benchmarks reflect reality.

The goal of this workshop is to provide a platform for vigorous discussion between researchers in various fields of ML and industry experts, in the hope of advancing the idea of using synthetic data to empower ML research. The workshop also provides a forum for constructive debate and identification of strengths and weaknesses relative to alternative approaches, e.g. federated learning.

Workshop

AI for Science: Progress and Promises

Yi Ding · Yuanqi Du · Tianfan Fu · Hanchen Wang · Anima Anandkumar · Yoshua Bengio · Anthony Gitter · Carla Gomes · Aviv Regev · Max Welling · Marinka Zitnik
Dec 2, 6:00 AM - 4:00 PM Room 388 - 390
Workshop

AI for Accelerated Materials Design (AI4Mat)

Santiago Miret · Marta Skreta · Zamyla Morgan-Chan · Benjamin Sanchez-Lengeling · Shyue Ping Ong · Alan Aspuru-Guzik
Dec 2, 6:00 AM - 3:00 PM Room 386

Self-Driving Materials Laboratories have greatly advanced the automation of material design and discovery. They require the integration of diverse fields and consist of three primary components, which intersect with many AI-related research topics:

- AI-Guided Design. This component intersects heavily with algorithmic research at NeurIPS, including (but not limited to) various topic areas such as: Reinforcement Learning and data-driven modeling of physical phenomena using Neural Networks (e.g. Graph Neural Networks and Machine Learning For Physics).

- Automated Chemical Synthesis. This component intersects significantly with robotics research represented at NeurIPS, and includes several parts of real-world robotic systems such as: managing control systems (e.g. Reinforcement Learning) and different sensor modalities (e.g. Computer Vision), as well as predictive models for various phenomena (e.g. Data-Based Prediction of Chemical Reactions).

- Automated Material Characterization. This component intersects heavily with a diverse set of supervised learning techniques that are well-represented at NeurIPS such as: computer vision for microscopy images and automated machine learning based analysis of data generated from different kinds of instruments (e.g. X-Ray based diffraction data for determining material structure).

Workshop

Order up! The Benefits of Higher-Order Optimization in Machine Learning

Albert Berahas · Jelena Diakonikolas · Jarad Forristal · Brandon Reese · Martin Takac · Yan Xu
Dec 2, 6:15 AM - 3:00 PM Room 275 - 277

Optimization is a cornerstone of nearly all modern machine learning (ML) and deep learning (DL). Simple first-order gradient-based methods dominate the field for convincing reasons: low computational cost, simplicity of implementation, and strong empirical results.

Yet second- or higher-order methods are rarely used in DL, despite also having many strengths: faster per-iteration convergence, frequent explicit regularization on step-size, and better parallelization than SGD. Additionally, many scientific fields use second-order optimization with great success.

A driving factor for this is the large difference in development effort. By the time higher-order methods were tractable for DL, first-order methods such as SGD and its main variants (SGD with momentum, Adam, …) already had many years of maturity and mass adoption.

The purpose of this workshop is to address this gap, to create an environment where higher-order methods are fairly considered and compared against one-another, and to foster healthy discussion with the end goal of mainstream acceptance of higher-order methods in ML and DL.
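As an illustration of the per-iteration contrast discussed above, the sketch below compares a plain gradient step with a Newton step on an ill-conditioned quadratic; the Newton step rescales the gradient by the inverse Hessian and needs no step-size tuning. All names here are illustrative, not part of the workshop.

```python
import numpy as np

def gd_step(x, grad, lr=0.01):
    # First-order: move against the gradient, with a step size that
    # must be tuned to the problem's conditioning.
    return x - lr * grad(x)

def newton_step(x, grad, hess):
    # Second-order: rescale the gradient by the inverse Hessian.
    # For a quadratic objective this lands on the minimizer exactly.
    return x - np.linalg.solve(hess(x), grad(x))

# Ill-conditioned quadratic f(x) = 0.5 * x^T A x, minimized at x = 0.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
hess = lambda x: A

x0 = np.array([1.0, 1.0])
x_newton = newton_step(x0, grad, hess)  # one step reaches the minimum
```

For general (non-quadratic) losses the Newton step is only locally exact, and practical higher-order methods approximate or regularize the Hessian solve.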

Workshop

3rd Offline Reinforcement Learning Workshop: Offline RL as a "Launchpad"

Aviral Kumar · Rishabh Agarwal · Aravind Rajeswaran · Wenxuan Zhou · George Tucker · Doina Precup
Dec 2, 6:20 AM - 3:30 PM Room 291 - 292

While offline RL focuses on learning solely from fixed datasets, one of the main learning points from the previous edition of the offline RL workshop was that large-scale RL applications typically want to use offline RL as part of a bigger system, as opposed to it being the end goal in itself. Thus, we propose to shift the focus from algorithm design and offline RL applications to how offline RL can be a launchpad, i.e., a tool or a starting point, for solving challenges in sequential decision-making such as exploration, generalization, transfer, safety, and adaptation. In particular, we are interested in studying and discussing methods for learning expressive models, policies, skills, and value functions from data that can help us make progress towards efficiently tackling these challenges, which are otherwise often intractable.


Submission site: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/Offline_RL. The submission deadline is September 25, 2022 (Anywhere on Earth). Please refer to the submission page for more details.

Workshop

Has it Trained Yet? A Workshop for Algorithmic Efficiency in Practical Neural Network Training

Frank Schneider · Zachary Nado · Philipp Hennig · George Dahl · Naman Agarwal
Dec 2, 6:30 AM - 3:00 PM Theater B

Workshop Description

Training contemporary neural networks is a lengthy and often costly process, both in human designer time and compute resources. Although the field has invented numerous approaches, neural network training still usually involves an inconvenient amount of “babysitting” to get the model to train properly. This not only requires enormous compute resources but also makes deep learning less accessible to outsiders and newcomers. This workshop will be centered around the question “How can we train neural networks faster?” by focusing on the effects algorithms (not hardware or software developments) have on the training time of neural networks. These algorithmic improvements can come in the form of novel methods, e.g. new optimizers or more efficient data selection strategies, or through empirical experience, e.g. best practices for quickly identifying well-working hyperparameter settings or informative metrics to monitor during training.

We all think we know how to train deep neural networks, but we all seem to have different ideas. Ask any deep learning practitioner about the best practices of neural network training, and you will often hear a collection of arcane recipes. Frustratingly, these hacks vary wildly between companies and teams. This workshop offers a platform to talk about these ideas, agree on what is actually known, and what is just noise. In this sense, this will not be an “optimization workshop” in the mathematical sense (of which there have been several in the past, of course).

To this end, the workshop’s goal is to connect two communities: researchers who develop new algorithms for faster neural network training, such as new optimization methods or deep learning architectures, and practitioners who, through their work on real-world problems, increasingly rely on “tricks of the trade”. The workshop aims to close the gap between research and applications by identifying the most relevant current issues that hinder faster neural network training in practice.

Topics

Among the topics addressed by the workshop are:

- What “best practices” for faster neural network training are used in practice and can we learn from them to build better algorithms?
- What are painful lessons learned while training deep learning models?
- What are the most needed algorithmic improvements for neural network training?
- How can we ensure that research on training methods for deep learning has practical relevance?

Important Dates

- Submission Deadline: September 30, 2022, 07:00am UTC (updated!)
- Accept/Reject Notification Date: October 20, 2022, 07:00am UTC (updated!)
- Workshop Date: December 2, 2022

Workshop

Causal Machine Learning for Real-World Impact

Nick Pawlowski · Jeroen Berrevoets · Caroline Uhler · Kun Zhang · Mihaela van der Schaar · Cheng Zhang
Dec 2, 6:30 AM - 3:00 PM Room 295 - 296

Causality has a long history and provides principled approaches to identifying a causal effect (or even distilling cause from effect). However, these approaches are often restricted to very specific situations and require very specific assumptions. This contrasts heavily with recent advances in machine learning: real-world problems aren’t granted the luxury of strict assumptions, yet they still require causal thinking to solve. Armed with the rigor of causality and the can-do attitude of machine learning, we believe the time is ripe to start working towards solving real-world problems.

Workshop

Human in the Loop Learning (HiLL) Workshop at NeurIPS 2022

Shanghang Zhang · Hao Dong · Wei Pan · Pradeep Ravikumar · Vittorio Ferrari · Fisher Yu · Xin Wang · Zihan Ding
Dec 2, 6:30 AM - 3:00 PM Room 396

Recent years have witnessed a rising need for machine learning systems that can interact with humans in the learning loop. Such systems can be applied to computer vision, natural language processing, robotics, and human-computer interaction. Creating and running such systems calls for interdisciplinary research in artificial intelligence, machine learning, and software engineering design, which we abstract as Human in the Loop Learning (HiLL).

The HiLL workshop aims to bring together researchers and practitioners working on the broad areas of HiLL, ranging from interactive/active learning algorithms for real-world decision-making systems (e.g., autonomous driving vehicles, robotic systems, etc.), human-inspired learning that mitigates the gap between human intelligence and machine intelligence, human-machine collaborative learning that creates a more powerful learning system, lifelong learning that transfers knowledge to learn new tasks over a lifetime, as well as interactive system designs (e.g., data visualization, annotation systems, etc.).

The HiLL workshop continues the previous effort to provide a platform for researchers from interdisciplinary areas to share their recent research. A special feature of this year’s workshop is to encourage discussion of interactive and collaborative learning between humans and machine learning agents: can they be organically combined to create a more powerful learning system? We believe the theme of the workshop will be of interest to broad NeurIPS attendees, especially those interested in interdisciplinary study.

Workshop

Federated Learning: Recent Advances and New Challenges

Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu
Dec 2, 6:30 AM - 3:00 PM Room 298 - 299

Training machine learning models in a centralized fashion often faces significant challenges in real-world use cases due to regulatory and privacy concerns. These challenges include training data distributed across sites, the computational resources needed to create and maintain a central data repository, and regulatory guidelines (GDPR, HIPAA) that restrict the sharing of sensitive data. Federated learning (FL) is a new paradigm in machine learning that can mitigate these challenges by training a global model on distributed data, without the need for data sharing. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data necessitates familiarization with and adoption of this relevant and timely topic within the scientific community.
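The "global model from distributed data" idea can be sketched with federated averaging (FedAvg), the canonical FL algorithm; the linear model, client data, and function names below are illustrative, not part of the workshop.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent on a
    linear least-squares model, using only that client's own data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: each client trains locally, and the server
    averages the returned weights (weighted by dataset size).
    Raw data never leaves the clients; only weights are shared."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
```

Real deployments layer secure aggregation, compression, and client sampling on top of this loop, but the core averaging step is unchanged.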

Despite the advantages of FL, and its successful application in certain industry-based cases, this field is still in its infancy due to new challenges that are imposed by limited visibility of the training data, potential lack of trust among participants training a single model, potential privacy inferences, and in some cases, limited or unreliable connectivity.

The goal of this workshop is to bring together researchers and practitioners interested in FL, which has become an increasingly popular topic in the machine learning community in recent years. This day-long event will facilitate interaction among students, scholars, and industry professionals from around the world to understand the topic, identify technical challenges, and discuss potential solutions, advancing FL and its impact in the community.

Workshop

Table Representation Learning

Madelon Hulsebos · Bojan Karlaš · Pengcheng Yin · haoyu dong
Dec 2, 6:30 AM - 3:45 PM Room 398

We develop large models to “understand” images, videos, and natural language, which fuel many intelligent applications from text completion to self-driving cars. Yet tabular data has long been overlooked despite its dominant presence in data-intensive systems. By learning latent representations from (semi-)structured tabular data, pretrained table models have shown preliminary but impressive performance on semantic parsing, question answering, table understanding, and data preparation. Considering that such tasks share fundamental properties inherent to tables, representation learning for tabular data is an important direction to explore further. These works have also surfaced many open challenges, such as finding effective data encodings, pretraining objectives, and downstream tasks.

Key questions that we aim to address in this workshop are:
- How should tabular data be encoded to make learned Table Models generalize across tasks?
- Which pre-training objectives, architectures, fine-tuning and prompting strategies, work for tabular data?
- How should the varying formats, data types, and sizes of tables be handled?
- To what extent can Language Models be adapted to tabular data tasks, and what are their limits?
- What tasks can existing Table Models accomplish well and what opportunities lie ahead?
- How do existing Table Models perform, what do they learn, where and how do they fall short?
- When and how should Table Models be updated in contexts where the underlying data source continuously evolves?

The First Table Representation Learning workshop is centered around three main goals:
1) Motivate tabular data as a primary modality for representation learning and further shape this emerging research area.
2) Showcase impactful applications of pretrained table models and discuss future opportunities thereof.
3) Foster discussion and collaboration across the machine learning, natural language processing, and data management communities.

Speakers
Alon Halevy (keynote), Meta AI
Graham Neubig (keynote), Carnegie Mellon University
Carsten Binnig, TU Darmstadt
Çağatay Demiralp, Sigma Computing
Huan Sun, Ohio State University
Xinyun Chen, Google Brain

Panelists
TBA

Scope
We invite submissions that address, but are not limited to, any of the following topics on machine learning for tabular data:
Representation Learning: representation learning techniques for structured (e.g., relational databases) or semi-structured (Web tables, spreadsheet tables) tabular data and interfaces to it. This includes developing specialized data encodings or adapting general-purpose ones (e.g., GPT-3) for tabular data, multimodal learning across tables and other modalities (e.g., natural language, images, code), and relevant fine-tuning and prompting strategies.
Downstream Applications: machine learning applications involving tabular data, such as data preparation (e.g. data cleaning, integration, cataloging, anomaly detection), retrieval (e.g., semantic parsing, question answering, fact-checking), information extraction, and generation (e.g., table-to-text).
Upstream Applications: applications that use representation learning to optimize tabular data processing systems, such as table parsers (extracting tables from documents, spreadsheets, presentations, images), storage (e.g. compression, indexing), and querying (e.g. query plan optimization, cost estimation).
Industry Papers: applications of tabular representation models in production, and challenges of maintaining and managing table representation models in a fast-evolving context, e.g. data updating, error correction, monitoring.
New Resources: survey papers, analyses, benchmarks, and datasets for tabular representation models and their applications, as well as visions and reflections to structure and guide future research.

Important dates
Submission open: 20 August 2022
Submission deadline: 26 September 2022
Notifications: 20 October 2022
Camera-ready, slides and recording upload: 3 November 2022
Workshop: 2 December 2022

Submission formats
Abstract: 1 page + references.
Extended abstract: at most 4 pages + references.
Regular paper: at least 6 pages + references.

Questions:
table-representation-learning-workshop@googlegroups.com (public)
m.hulsebos@uva.nl (private)

Workshop

INTERPOLATE — First Workshop on Interpolation Regularizers and Beyond

Yann Dauphin · David Lopez-Paz · Vikas Verma · Boyi Li
Dec 2, 6:30 AM - 4:00 PM Room 393

Goals

Interpolation regularizers are an increasingly popular approach to regularizing deep models. For example, the mixup data augmentation method constructs synthetic examples by linearly interpolating random pairs of training data points. During their half-decade lifespan, interpolation regularizers have become ubiquitous and fuel state-of-the-art results in virtually all domains, including computer vision and medical diagnosis. This workshop brings together researchers and users of interpolation regularizers to foster research and discussion toward advancing and understanding them. This inaugural meeting will have no shortage of interactions and energy to achieve these exciting goals. Suggested topics include, but are not limited to, the intersection between interpolation regularizers and:

* Domain generalization
* Semi-supervised learning
* Privacy-preserving ML
* Theory
* Robustness
* Fairness
* Vision
* NLP
* Medical applications
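The mixup construction mentioned above can be sketched in a few lines; the interface below is illustrative, and the labels are assumed to be one-hot vectors.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, seed=None):
    """Build a synthetic training example by linearly interpolating a pair
    of data points (inputs and one-hot labels) with a Beta-distributed
    mixing coefficient, as in mixup."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)  # lam in (0, 1), usually near 0 or 1
    x_mix = lam * x1 + (1 - lam) * x2
    y_mix = lam * y1 + (1 - lam) * y2
    return x_mix, y_mix
```

In practice the pairs are drawn within each minibatch and the interpolated batch replaces the original one for that training step.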

## Important dates

* Paper submission deadline: September 22, 2022
* Paper acceptance notification: October 14, 2022
* Workshop: December 2, 2022

## Call for papers

Authors are invited to submit short papers of up to 4 pages, with an unlimited number of pages for references and supplementary materials. Submissions must be anonymized, as the reviewing process will be double-blind. Please use the NeurIPS template for submissions. To foster discussion, we also welcome submissions that have already been published during COVID; the venue of publication should be clearly indicated during submission for such papers. Submission Link: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/INTERPOLATE

## Invited Speakers

Chelsea Finn, from Stanford, on "Repurposing Mixup for Robustness and Regression"
Sanjeev Arora, from Princeton, on "Using Interpolation Ideas to provide privacy in Federated Learning settings"
Kenji Kawaguchi, from NUS, on "The developments of the theory of Mixup"
Youssef Mroueh, from IBM, on "Fairness and mixing"
Alex Lamb, from MSR, on "What matters in the world? Exploring algorithms for provably ignoring irrelevant details"

Workshop

Memory in Artificial and Real Intelligence (MemARI)

Mariya Toneva · Javier Turek · Vy Vo · Shailee Jain · Kenneth Norman · Alexander Huth · Uri Hasson · Mihai Capotă
Dec 2, 6:30 AM - 3:00 PM Room 397

One of the key challenges for AI is to understand, predict, and model data over time. Pretrained networks should be able to temporally generalize, or adapt to shifts in data distributions that occur over time. Our current state-of-the-art (SOTA) still struggles to model and understand data over long temporal durations – for example, SOTA models are limited to processing several seconds of video, and powerful transformer models are still fundamentally limited by their attention spans. On the other hand, humans and other biological systems are able to flexibly store and update information in memory to comprehend and manipulate multimodal streams of input. Cognitive neuroscientists propose that they do so via the interaction of multiple memory systems with different neural mechanisms.

What types of memory systems and mechanisms already exist in our current AI models? First, there are extensions of the classic proposal that memories are formed via synaptic plasticity mechanisms – information can be stored in the static weights of a pre-trained network, or in fast weights that more closely resemble short-term plasticity mechanisms. Then there are persistent memory states, such as those in LSTMs or in external differentiable memory banks, which store information as neural activations that can change over time. Finally, there are models augmented with static databases of knowledge, akin to a high-precision long-term memory or semantic memory in humans.

When is it useful to store information in each one of these mechanisms, and how should models retrieve from them or modify the information therein? How should we design models that may combine multiple memory mechanisms to address a problem? Furthermore, do the shortcomings of current models require some novel memory systems that retain information over different timescales, or with different capacity or precision? Finally, what can we learn from memory processes in biological systems that may advance our models in AI?

We aim to explore how a deeper understanding of memory mechanisms can improve task performance in many different application domains, such as lifelong / continual learning, reinforcement learning, computer vision, and natural language processing.

Workshop

LaReL: Language and Reinforcement Learning

Laetitia Teodorescu · Laura Ruis · Tristan Karch · Cédric Colas · Paul Barde · Jelena Luketina · Athul Jacob · Pratyusha Sharma · Edward Grefenstette · Jacob Andreas · Marc-Alexandre Côté
Dec 2, 6:30 AM - 3:00 PM Room 391

Language is one of the most impressive human accomplishments and is believed to be at the core of our ability to learn, teach, reason, and interact with others. Learning many complex tasks or skills would be significantly more challenging without relying on language to communicate, and language is believed to have a structuring impact on human thought. Written language has also given humans the ability to store information and insights about the world and pass them across generations and continents. Yet the ability of current state-of-the-art reinforcement learning agents to understand natural language is limited.

Practically speaking, the ability to integrate and learn from language, in addition to rewards and demonstrations, has the potential to improve the generalization, scope and sample efficiency of agents. For example, agents that are capable of transferring domain knowledge from textual corpora might be able to much more efficiently explore in a given environment or to perform zero or few shot learning in novel environments. Furthermore, many real-world tasks, including personal assistants and general household robots, require agents to process language by design, whether to enable interaction with humans, or simply use existing interfaces.

To support this field of research, we are interested in fostering the discussion around:

- Methods that can effectively link language to actions and observations in the environment;
- Research into language roles beyond encoding goal states, such as structuring hierarchical policies, communicating domain knowledge, or reward shaping;
- Methods that can help identify and incorporate outside textual information about the task, or general-purpose semantics learned from outside corpora;
- Novel environments and benchmarks enabling such research and approaching the complexity of real-world problem settings.

The aim of the workshop on Language in Reinforcement Learning (LaReL) is to steer discussion and research of these problems by bringing together researchers from several communities, including reinforcement learning, robotics, natural language processing, computer vision and cognitive psychology.

Workshop

New Frontiers in Graph Learning

Jiaxuan You · Marinka Zitnik · Rex Ying · Yizhou Sun · Hanjun Dai · Stefanie Jegelka
Dec 2, 6:40 AM - 3:00 PM Theater A

Background. In recent years, graph learning has quickly grown into an established sub-field of machine learning. Researchers have been focusing on developing novel model architectures, theoretical understandings, scalable algorithms and systems, and successful applications of graph learning across industry and science. In fact, more than 5000 research papers related to graph learning have been published over the past year alone.

Challenges. Despite the success, existing graph learning paradigms have not captured the full spectrum of relationships in the physical and the virtual worlds. For example, in terms of applicability of graph learning algorithms, current graph learning paradigms are often restricted to datasets with explicit graph representations, whereas recent works have shown the promise of graph learning methods for applications without explicit graph representations. In terms of usability, while popular graph learning libraries greatly facilitate the implementation of graph learning techniques, finding the right graph representation and model architecture for a given use case still requires substantial expert knowledge. Furthermore, in terms of generalizability, unlike domains such as computer vision and natural language processing, where large-scale pre-trained models generalize across downstream applications with little to no fine-tuning and demonstrate impressive performance, such a paradigm has yet to succeed in the graph learning domain.

Goal. The primary goal of this workshop is to expand the impact of graph learning beyond the current boundaries. We believe that graphs, or relational data, are a universal language that can be used to describe the complex world. Ultimately, we hope graph learning will become a generic tool for learning and understanding any type of (structured) data. We aim to present and discuss the new frontiers in graph learning with researchers and practitioners within and outside the graph learning community. New understandings of the current challenges, new perspectives regarding the future directions, and new solutions and applications as proof of concepts are highly welcomed.

Scope and Topics. We welcome submissions regarding the new frontiers of graph learning, including but not limited to:
- Graphs in the wild: Graph learning for datasets and applications without explicit relational structure (e.g., images, text, audio, code). Novel ways of modeling structured/unstructured data as graphs are highly welcomed.
- Graphs in ML: Graph representations in general machine learning problems (e.g., neural architectures as graphs, relations among input data and learning tasks, graphs in large language models, etc.)
- New oasis: Graph learning methods that are significantly different from the current paradigms (e.g., large-scale pre-trained models, multi-task models, super scalable algorithms, etc.)
- New capabilities: Graph representation for knowledge discovery, optimization, causal inference, explainable ML, ML fairness, etc.
- Novel applications: Novel applications of graph learning in real-world industry and scientific domains. (e.g., graph learning for missing data imputation, program synthesis, etc.)

Call for papers

Submission deadline: Thursday, Sept 22, 2022 (16:59 PDT)

Submission site (OpenReview): NeurIPS 2022 GLFrontiers Workshop

Author notification: Thursday, Oct 6, 2022

Camera ready deadline: Thursday, Oct 27, 2022 (16:59 PDT)

Workshop (in person): Friday, Dec 2, 2022

The workshop will be held fully in person at the New Orleans Convention Center, as part of the NeurIPS 2022 conference. We also plan to offer a livestream of the event, and more details will come soon.

We welcome both short research papers of up to 4 pages (excluding references and supplementary materials), and full-length research papers of up to 8 pages (excluding references and supplementary materials). All accepted papers will be presented as posters. We plan to select around 6 papers for oral presentations and 2 papers for the outstanding paper awards with potential cash incentives.

All submissions must use the NeurIPS template. We do not require the authors to include the checklist in the template. Submissions should be in .pdf format, and the review process is double-blind; the papers should therefore be appropriately anonymized. Previously published work (or work currently under review) is acceptable.

Should you have any questions, please reach out to us via email:
glfrontiers@googlegroups.com

Workshop

Shared Visual Representations in Human and Machine Intelligence (SVRHM)

Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths
Dec 2, 6:45 AM - 4:00 PM Room 394-395
Workshop

NeurIPS 2022 Workshop on Score-Based Methods

Yingzhen Li · Yang Song · Valentin De Bortoli · Francois-Xavier Briol · Wenbo Gong · Alexia Jolicoeur-Martineau · Arash Vahdat
Dec 2, 6:50 AM - 3:00 PM Room 293 - 294

The score function, which is the gradient of the log-density, provides a unique way to represent probability distributions. By working with distributions through score functions, researchers have been able to develop efficient tools for machine learning and statistics, collectively known as score-based methods.
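As a toy illustration of this definition (the Gaussian example and function names below are our own, not part of the workshop description), the score of a univariate Gaussian has a simple closed form that can be checked against a finite-difference gradient of the log-density:

```python
import math

def log_density(x, mu=0.0, sigma=1.0):
    """Log-density of a univariate Gaussian N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def score(x, mu=0.0, sigma=1.0):
    """Score function: the gradient of the log-density, here d/dx log p(x) = (mu - x) / sigma^2."""
    return (mu - x) / sigma ** 2

def score_fd(x, mu=0.0, sigma=1.0, eps=1e-6):
    """Central finite-difference approximation of the score."""
    return (log_density(x + eps, mu, sigma) - log_density(x - eps, mu, sigma)) / (2 * eps)

# The closed form and the numerical gradient agree closely, e.g. at x = 1.5.
print(score(1.5), score_fd(1.5))
```

Note that the score depends only on the shape of the density, not its normalizing constant, which is what makes score-based methods tractable for unnormalized models.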

Score-based methods have had a significant impact on vastly different subfields of machine learning and statistics, such as generative modeling, Bayesian inference, hypothesis testing, control variates and Stein’s methods. For example, score-based generative models, or denoising diffusion models, have emerged as the state-of-the-art technique for generating high-quality and diverse images. In addition, recent developments in Stein’s method and score-based approaches for stochastic differential equations (SDEs) have contributed to the development of fast and robust Bayesian posterior inference in high dimensions. These have potential applications in engineering fields, where they could help improve simulation models.

At our workshop, we will bring together researchers from these various subfields to discuss the success of score-based methods, and identify common challenges across different research areas. We will also explore the potential for applying score-based methods to even more real-world applications, including in computer vision, signal processing, and computational chemistry. By doing so, we hope to foster collaboration among researchers and build a more cohesive research community focused on score-based methods.

Workshop

Medical Imaging meets NeurIPS

DOU QI · Konstantinos Kamnitsas · Yuankai Huo · Xiaoxiao Li · Daniel Moyer · Danielle Pace · Jonas Teuwen · Islem Rekik
Dec 2, 6:55 AM - 3:00 PM Room 283 - 285

'Medical Imaging meets NeurIPS' is a satellite workshop established in 2017. The workshop aims to bring researchers together from the medical image computing and machine learning communities. The objective is to discuss the major challenges in the field and opportunities for joining forces. This year the workshop will feature online oral and poster sessions with an emphasis on audience interactions. In addition, there will be a series of high-profile invited speakers from industry, academia, engineering and medical sciences giving an overview of recent advances, challenges, latest technology and efforts for sharing clinical data.

Workshop

Learning from Time Series for Health

Sana Tonekaboni · Tom Hartvigsen · Satya Narayan Shukla · Gunnar Rätsch · Marzyeh Ghassemi · Anna Goldenberg
Dec 2, 7:00 AM - 3:00 PM Room 392

Time series data are ubiquitous in healthcare, from medical time series to wearable data, and present an exciting opportunity for machine learning methods to extract actionable insights about human health. However, a huge gap remains between the existing time series literature and what is needed to make machine learning systems practical and deployable for healthcare. This is because learning from time series for health is notoriously challenging: labels are often noisy or missing, data can be multimodal and extremely high dimensional, missing values are pervasive, measurements are irregular, data distributions shift rapidly over time, explaining model outcomes is challenging, and deployed models require careful maintenance over time. These challenges introduce interesting research problems that the community has been actively working on for the last few years, with significant room for contribution still remaining. Learning from time series for health is a uniquely challenging and important area with increasing application. Significant advancements are required to realize the societal benefits of these systems for healthcare. This workshop will bring together machine learning researchers dedicated to advancing the field of time series modeling in healthcare to bring these models closer to deployment.

Workshop

Robustness in Sequence Modeling

Nathan Ng · Haoran Zhang · Vinith Suriyakumar · Chantal Shaib · Kyunghyun Cho · Sharon Li · Alice Oh · Marzyeh Ghassemi
Dec 2, 7:00 AM - 3:00 PM Room 290

As machine learning models find increasing use in the real world, ensuring their safe and reliable deployment depends on ensuring their robustness to distribution shift. This is especially true for sequential data, which occurs naturally in various data domains such as natural language processing, healthcare, computational biology, and finance. However, building models for sequence data which are robust to distribution shifts presents a unique challenge. Sequential data are often discrete rather than continuous, exhibit difficult-to-characterize distributions, and can display a much greater range of types of distributional shift. Although many methods for improving model robustness exist for imaging or tabular data, extending these methods to sequential data is a challenging research direction that often requires fundamentally different techniques.

This workshop aims to facilitate progress towards improving the distributional robustness of models trained on sequential data by bringing together researchers to tackle a wide variety of research questions including, but not limited to:
(1) How well do existing robustness methods work on sequential data, and why do they succeed or fail?
(2) How can we leverage the sequential nature of the data to develop novel and distributionally robust methods?
(3) How do we construct and utilize formalisms for distribution shifts in sequential data?

We hope that this workshop provides a first step towards improving the robustness, and ultimately safety and reliability, of models in sequential data domains.

Workshop

All Things Attention: Bridging Different Perspectives on Attention

Abhijat Biswas · Akanksha Saran · Khimya Khetarpal · Reuben Aronson · Ruohan Zhang · Grace Lindsay · Scott Niekum
Dec 2, 7:00 AM - 4:00 PM Room 399

Attention is a widely popular topic studied in many fields such as neuroscience, psychology, and machine learning. A better understanding and conceptualization of attention in both humans and machines has led to significant progress across fields. At the same time, attention is far from a clear or unified concept, with many definitions within and across multiple fields.

Cognitive scientists study how the brain flexibly controls its limited computational resources to accomplish its objectives. Inspired by cognitive attention, machine learning researchers introduce attention as an inductive bias in their models to improve performance or interpretability. Human-computer interaction designers monitor people’s attention during interactions to implicitly detect aspects of their mental states.

While the aforementioned research areas all consider attention, each formalizes and operationalizes it in different ways. Bridging this gap will facilitate:
- (CogSci for AI) More principled forms of attention in AI agents, towards more human-like abilities such as robust generalization, quicker learning and faster planning.
- (AI for CogSci) Developing better computational models for modeling human behaviors that involve attention.
- (HCI) Modeling attention during interactions from implicit signals, for fluent and efficient coordination.
- (HCI/ML) Artificial models of algorithmic attention to enable intuitive interpretations of deep models.

Workshop

NeurIPS 2022 Workshop on Meta-Learning

Huaxiu Yao · Eleni Triantafillou · Fabio Ferreira · Joaquin Vanschoren · Qi Lei
Dec 2, 7:00 AM - 4:00 PM Theater C

Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to efficiently learn new tasks, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations, classifiers, and policies for acting in environments. In practice, meta-learning has been shown to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems. Moreover, improving one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and neuroscience shows a strong connection between human reward learning and the growing sub-field of meta-reinforcement learning.

Some of the fundamental questions that this workshop aims to address are:
- What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
- What is the relationship between meta-learning, continual learning, and transfer learning?
- What interactions exist between meta-learning and large pretrained / foundation models?
- What principles can we learn from meta-learning to help us design the next generation of learning systems?
- What kind of theoretical principles can we develop for meta-learning?
- How can we exploit our domain knowledge to effectively guide the meta-learning process and make it more efficient?
- How can we design better benchmarks for different meta-learning scenarios?

As prospective participants, we primarily target machine learning researchers interested in the questions and foci outlined above. Specific target communities within machine learning include, but are not limited to: meta-learning, AutoML, reinforcement learning, deep learning, optimization, evolutionary computation, and Bayesian optimization. We also invite submissions from researchers who study human learning and neuroscience, to provide a broad and interdisciplinary perspective to the attendees.

Workshop

Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems

Alexander Terenin · Elizaveta Semenova · Geoff Pleiss · Zi Wang
Dec 2, 7:00 AM - 4:00 PM Room 387

In recent years, the growth of decision-making applications, where principled handling of uncertainty is of key concern, has led to increased interest in Bayesian techniques. By offering the capacity to assess and propagate uncertainty in a principled manner, Gaussian processes have become a key technique in areas such as Bayesian optimization, active learning, and probabilistic modeling of dynamical systems. In parallel, the need for uncertainty-aware modeling of quantities that vary over space and time has led to large-scale deployment of Gaussian processes, particularly in application areas such as epidemiology. In this workshop, we bring together researchers from different communities to share ideas and success stories. By showcasing key applied challenges, along with recent theoretical advances, we hope to foster connections and prompt fruitful discussion. We invite researchers to submit extended abstracts for contributed talks and posters.

Workshop

Algorithmic Fairness through the Lens of Causality and Privacy

Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff
Dec 3, 5:30 AM - 2:55 PM Room 392

As machine learning models permeate every aspect of decision making systems in consequential areas such as healthcare and criminal justice, it has become critical for these models to satisfy trustworthiness desiderata such as fairness, interpretability, accountability, privacy and security. Initially studied in isolation, recent work has emerged at the intersection of these different fields of research, leading to interesting questions on how fairness can be achieved using a causal perspective and under privacy concerns.

Indeed, the field of causal fairness has seen a large expansion in recent years, notably as a way to counteract the limitations of initial statistical definitions of fairness. While a causal framing provides flexibility in modelling and mitigating sources of bias using a causal model, proposed approaches rely heavily on assumptions about the data generating process, i.e., the faithfulness and ignorability assumptions. This leads to open discussions on (1) how to fully characterize causal definitions of fairness, (2) how, if possible, to improve the applicability of such definitions, and (3) what constitutes a suitable causal framing of bias from a sociotechnical perspective.

Additionally, while most existing work on causal fairness assumes observed sensitive attribute data, such information is likely to be unavailable due to, for example, data privacy laws or ethical considerations. This observation has motivated initial work on training and evaluating fair algorithms without access to sensitive information and studying the compatibility and trade-offs between fairness and privacy. However, such work has been limited, for the most part, to statistical definitions of fairness raising the question of whether these methods can be extended to causal definitions.

Given the interesting questions that emerge at the intersection of these different fields, this workshop aims to deeply investigate how these different topics relate, but also how they can augment each other to provide better or more suited definitions and mitigation strategies for algorithmic fairness.

Workshop

Temporal Graph Learning Workshop

Reihaneh Rabbany · Jian Tang · Michael Bronstein · Shenyang Huang · Meng Qu · Kellin Pelrine · Jianan Zhao · Farimah Poursafaei · Aarash Feizi
Dec 3, 5:30 AM - 3:00 PM Room 399

This workshop bridges the conversation among different areas such as temporal knowledge graph learning, graph anomaly detection, and graph representation learning. It aims to share understanding and techniques to facilitate the development of novel temporal graph learning methods. It also brings together researchers from both academia and industry and connects researchers from various fields aiming to span theories, methodologies, and applications.

Workshop

Gaze meets ML

Ismini Lourentzou · Joy T Wu · Satyananda Kashyap · Alexandros Karargyris · Leo Anthony Celi · Ban Kawas · Sachin S Talathi
Dec 3, 5:30 AM - 3:00 PM Room 386

Eye gaze has proven to be a cost-efficient way to collect large-scale physiological data that can reveal the underlying human attentional patterns in real-life workflows, and thus has long been explored as a signal to directly measure human-related cognition in various domains. Physiological data (including but not limited to eye gaze) offer new perception capabilities, which could be used in several ML domains, e.g., egocentric perception, embodied AI, NLP, etc. They can help infer human perception, intentions, beliefs, goals, and other cognition properties that are much needed for human-AI interactions and agent coordination. In addition, large collections of eye-tracking data have enabled data-driven modeling of human visual attention mechanisms, for both saliency and scanpath prediction, with twofold advantages: from the neuroscientific perspective to understand biological mechanisms better, and from the AI perspective to equip agents with the ability to mimic or predict human behavior and improve interpretability and interactions.

With the emergence of immersive technologies, now more than ever there is a need for experts of various backgrounds (e.g., the machine learning, vision, and neuroscience communities) to share expertise and contribute to a deeper understanding of the intricacies of cost-efficient human supervision signals (e.g., eye-gaze) and their utilization towards bridging human cognition and AI in machine learning research and development. The goal of this workshop is to bring together an active research community to collectively drive progress in defining and addressing core problems in gaze-assisted machine learning.

Workshop

Reinforcement Learning for Real Life (RL4RealLife) Workshop

Yuxi Li · Emma Brunskill · MINMIN CHEN · Omer Gottesman · Lihong Li · Yao Liu · Zhiwei Tony Qin · Matthew Taylor
Dec 3, 5:30 AM - 3:00 PM Theater A

Discover how to improve the adoption of RL in practice by discussing key research problems, the state of the art, and success stories, insights, and lessons regarding practical RL algorithms, practical issues, and applications, with leading experts from both academia and industry at the NeurIPS 2022 RL4RealLife workshop.

Workshop

Machine Learning and the Physical Sciences

Atilim Gunes Baydin · Adji Bousso Dieng · Emine Kucukbenli · Gilles Louppe · Siddharth Mishra-Sharma · Benjamin Nachman · Brian Nord · Savannah Thais · Anima Anandkumar · Kyle Cranmer · Lenka Zdeborová · Rianne van den Berg
Dec 3, 5:50 AM - 3:00 PM Room 275 - 277

The Machine Learning and the Physical Sciences workshop aims to provide an informal, inclusive and leading-edge venue for research and discussions at the interface of machine learning (ML) and the physical sciences. This interface spans (1) applications of ML in physical sciences (ML for physics), (2) developments in ML motivated by physical insights (physics for ML), and most recently (3) convergence of ML and physical sciences (physics with ML), which inspires questioning what scientific understanding means in the age of complex, AI-powered science, and what roles machine and human scientists will play in developing scientific understanding in the future.

Workshop

Self-Supervised Learning: Theory and Practice

Ishan Misra · Pengtao Xie · Gul Varol · Yale Song · Yuki Asano · Xiaolong Wang · Pauline Luc
Dec 3, 6:15 AM - 3:00 PM Room 391
Workshop

I Can’t Believe It’s Not Better: Understanding Deep Learning Through Empirical Falsification

Arno Blaas · Sahra Ghalebikesabi · Javier Antorán · Fan Feng · Melanie F. Pradier · Ian Mason · David Rohde
Dec 3, 6:15 AM - 3:00 PM La Nouvelle Orleans Ballroom C (level 2)

Deep learning has flourished in the last decade. Recent breakthroughs have shown stunning results, and yet, researchers still cannot fully explain why neural networks generalise so well or why some architectures or optimizers work better than others. There is a lack of understanding of existing deep learning systems, which led NeurIPS 2017 test of time award winners Rahimi & Recht to compare machine learning with alchemy and to call for the return of the 'rigour police'.

Despite excellent theoretical work in the field, deep neural networks are so complex that they might not be able to be fully comprehended with theory alone. Unfortunately, the experimental alternative (rigorous work that neither proves a theorem nor proposes a new method) is currently undervalued in the machine learning community.

To change this, this workshop aims to promote the method of empirical falsification.

We solicit contributions which explicitly formulate a hypothesis related to deep learning or its applications (based on first principles or prior work), and then empirically falsify it through experiments. We further encourage submissions to go a layer deeper and investigate the causes of an initial idea not working as expected. This workshop will showcase how negative results offer important learning opportunities for deep learning researchers, possibly far greater than the incremental improvements found in conventional machine learning papers!

Why empirical falsification? In the words of Karl Popper, "It is easy to obtain confirmations, or verifications, for nearly every theory—if we look for confirmations. Confirmations should count only if they are the result of risky predictions."
We believe that, much as in physics, which seeks to understand nature, the complexity of deep neural networks makes any understanding of them that is built inductively likely to be brittle.

The most reliable method with which physicists can probe nature is by experimentally validating (or not) the falsifiable predictions made by their existing theories. We posit the same could be the case for deep learning and believe that the task of understanding deep neural networks would benefit from adopting the approach of empirical falsification.

Workshop

The Fourth Workshop on AI for Humanitarian Assistance and Disaster Response

Ritwik Gupta · Robin Murphy · Eric Heim · Guido Zarrella · Caleb Robinson
Dec 3, 6:15 AM - 2:15 PM Room 398

Humanitarian crises, from disease outbreak to war to oppression against disadvantaged groups, have threatened people and their communities throughout history. Natural disasters are a single, extreme example of such crises. In the wake of hurricanes, earthquakes, and other such crises, people have ceaselessly sought ways, often harnessing innovation, to provide assistance to victims after disasters have struck.

Through this workshop, we intend to establish meaningful dialogue between the Artificial Intelligence (AI) and Humanitarian Assistance and Disaster Response (HADR) communities. By the end of the workshop, the NeurIPS research community can learn the practical challenges of aiding those in crisis, while the HADR community can get to know the state of the art and practice in AI. We seek to establish a pipeline for transitioning the research created by the NeurIPS community to real-world humanitarian issues. We believe such an endeavor is possible due to recent successes in applying techniques from various AI and Machine Learning (ML) disciplines to HADR.

Workshop

Symmetry and Geometry in Neural Representations (NeurReps)

Sophia Sanborn · Christian A Shewmake · Simone Azeglio · Arianna Di Bernardo · Nina Miolane
Dec 3, 6:15 AM - 3:00 PM Room 283 - 285

In recent years, there has been a growing appreciation for the importance of modeling the geometric structure in data — a perspective that has developed in both the geometric deep learning and applied geometry communities. In parallel, an emerging set of findings in neuroscience suggests that group-equivariance and the preservation of geometry and topology may be fundamental principles of neural coding in biology.

This workshop will bring together researchers from geometric deep learning and geometric statistics with theoretical and empirical neuroscientists whose work reveals the elegant implementation of geometric structure in biological neural circuitry. Group theory and geometry were instrumental in unifying models of fundamental forces and elementary particles in 20th-century physics. Likewise, they have the potential to unify our understanding of how neural systems form useful representations of the world.

The goal of this workshop is to unify the emerging paradigm shifts towards structured representations in deep networks and the geometric modeling of neural data — while promoting a solid mathematical foundation in algebra, geometry, and topology.

Workshop

Machine Learning for Autonomous Driving

Jiachen Li · Nigamaa Nayakanti · Xinshuo Weng · Daniel Omeiza · Ali Baheri · German Ros · Rowan McAllister
Dec 3, 6:20 AM - 3:00 PM Theater B

Welcome to the NeurIPS 2022 Workshop on Machine Learning for Autonomous Driving!

Autonomous vehicles (AVs) offer a rich source of high-impact research problems for the machine learning (ML) community; including perception, state estimation, probabilistic modeling, time series forecasting, gesture recognition, robustness guarantees, real-time constraints, user-machine communication, multi-agent planning, and intelligent infrastructure. Further, the interaction between ML subfields towards a common goal of autonomous driving can catalyze interesting inter-field discussions that spark new avenues of research, which this workshop aims to promote. As an application of ML, autonomous driving has the potential to greatly improve society by reducing road accidents, giving independence to those unable to drive, and even inspiring younger generations with tangible examples of ML-based technology clearly visible on local streets. All are welcome to attend! This will be the 7th NeurIPS workshop in this series. Previous workshops in 2016, 2017, 2018, 2019, 2020, and 2021 enjoyed wide participation from both academia and industry.

Workshop

Machine Learning for Systems

Neel Kant · Martin Maas · Azade Nova · Benoit Steiner · Xinlei XU · Dan Zhang
Dec 3, 6:30 AM - 2:30 PM Room 396

Machine Learning (ML) for Systems is an important direction for applying ML in the real world. It has been shown that ML can replace long-standing heuristics in computer systems by leveraging supervised learning and reinforcement learning (RL) approaches. The computer systems community recognizes the importance of ML in tackling strenuous multi-objective tasks such as designing new data structures [1], integrated circuits [2,3], or schedulers, as well as implementing control algorithms for applications such as compilers [12,13], databases [8], memory management [9,10] or ML frameworks [6].

General Workshop Direction. This is the fifth iteration of this workshop. In previous editions, we showcased approaches and frameworks to solve problems, bringing together researchers and practitioners at NeurIPS from both the ML and systems communities. While breaking new ground, we encouraged collaborations and development across a broad range of ML for Systems work, much of it later published in top-tier conferences [6,13,14,15,16,17,18]. This year, we plan to continue on this path while expanding our call for papers to encourage emerging work on minimizing energy footprint, reaching carbon neutrality, and using machine learning for system security and privacy.

Focusing the Workshop on Unifying Works. As the field of ML for Systems is maturing, we are adapting the focus and format of the workshop to evolve with it. The community has seen several efforts to consolidate different subfields of ML for Systems [4,5,6,7]. However, such efforts need more support. To boost recent advances in shared methodology, tools, and frameworks, this year we will welcome submissions presenting datasets, simulators, or benchmarks that can facilitate research in the area.

Workshop

Machine Learning in Structural Biology Workshop

Roshan Rao · Jonas Adler · Namrata Anand · John Ingraham · Sergey Ovchinnikov · Ellen Zhong
Dec 3, 6:30 AM - 3:00 PM Room 288 - 289

In only a few years, structural biology, the study of the 3D structure or shape of proteins and other biomolecules, has been transformed by breakthroughs from machine learning algorithms. Machine learning models are now routinely being used by experimentalists to predict structures that can help answer real biological questions (e.g. AlphaFold), accelerate the experimental process of structure determination (e.g. computer vision algorithms for cryo-electron microscopy), and have become a new industry standard for bioengineering new protein therapeutics (e.g. large language models for protein design). Despite all this progress, there are still many active and open challenges for the field, such as modeling protein dynamics, predicting higher order complexes, pushing towards generalization of protein folding physics, and relating the structure of proteins to the in vivo and contextual nature of their underlying function. These challenges are diverse and interdisciplinary, motivating new kinds of machine learning systems and requiring the development and maturation of standard benchmarks and datasets.

In this exciting time for the field, our workshop, “Machine Learning in Structural Biology” (MLSB), seeks to bring together relevant experts, practitioners, and students across a broad community to focus on these challenges and opportunities. We believe that uniting these communities at our workshop, including the geometric and graph learning communities, NLP researchers, and structural biologists with domain expertise, can help spur new ideas, spark collaborations, and advance the impact of machine learning in structural biology. Progress at this intersection promises to unlock new scientific discoveries and the ability to design novel medicines.

Workshop

Information-Theoretic Principles in Cognitive Systems

Noga Zaslavsky · Mycal Tucker · Sarah Marzen · Irina Higgins · Stephanie Palmer · Samuel J Gershman
Dec 3, 6:30 AM - 3:00 PM Room 357

Many cognitive and neural systems can be described in terms of compression and transmission of information given bounded resources. While information theory, as a principled mathematical framework for characterizing such systems, has been widely applied in neuroscience and machine learning, its role in understanding cognition has traditionally been contested. This traditional view has been changing in recent years, with growing evidence that information-theoretic optimality principles underlie a wide range of cognitive functions, including perception, working memory, language, and decision making. In parallel, there has also been a surge of contemporary information-theoretic approaches in machine learning, enabling large-scale neural-network implementation of information-theoretic models.

These scientific and technological developments open up new avenues for progress toward an integrative computational theory of human and artificial cognition, by leveraging information-theoretic principles as bridges between various cognitive functions and neural representations. This workshop aims to explore these new research directions and bring together researchers from machine learning, cognitive science, neuroscience, linguistics, economics, and potentially other fields, who are interested in integrating information-theoretic approaches that have thus far been studied largely independently of each other. In particular, we aim to discuss questions and exchange ideas along the following directions:

- Understanding human cognition: To what extent can information theoretic principles advance the understanding of human cognition and its emergence from neural systems? What are the key challenges for future research in information theory and cognition? How might tools from machine learning help overcome these challenges? Addressing such questions could lead to progress in computational models that integrate multiple cognitive functions and cross Marr’s levels of analysis.

- Improving AI agents and human-AI cooperation: Given empirical evidence that information theoretic principles may underlie a range of human cognitive functions, how can such principles guide artificial agents toward human-like cognition? How might these principles facilitate human-AI communication and cooperation? Can this help agents learn faster with less data? Addressing such questions could lead to progress in developing better human-like AI systems.

Workshop

Broadening Research Collaborations

Sara Hooker · Rosanne Liu · Pablo Samuel Castro · Niloofar Mireshghallah · Sunipa Dev · Benjamin Rosman · João Madeira Araújo · Savannah Thais · Sunny Sanyal · Tejumade Afonja · Swapneel Mehta · Tyler Zhu
Dec 3, 6:45 AM - 3:00 PM Room 394-395

This workshop aims to discuss the challenges and opportunities of expanding research collaborations in light of the changing landscape of where, how, and by whom research is produced. Progress toward democratizing AI research has centered on making knowledge (e.g. class materials), established ideas (e.g. papers), and technologies (e.g. code, compute) more accessible. However, open, online resources are only part of the equation. Growth as a researcher requires not only learning by consuming information individually, but also hands-on practice whiteboarding, coding, plotting, debugging, and writing collaboratively, with either mentors or peers. Of course, making "collaborators" more universally accessible is fundamentally more difficult than, say, ensuring all can access arXiv papers, because scaling people and research groups is much harder than scaling websites. Can we nevertheless make access to collaboration itself more open?

Workshop

Decentralization and Trustworthy Machine Learning in Web3: Methodologies, Platforms, and Applications

Jian Lou · Zhiguang Wang · Chejian Xu · Bo Li · Dawn Song
Dec 3, 6:45 AM - 2:00 PM Room 298 - 299

The recent rapid development of machine learning has largely benefited from algorithmic advances, the collection of large-scale datasets, and the availability of high-performance computation resources, among others. However, the large volume of collected data and massive information may also bring serious security, privacy, service-provisioning, and network-management challenges. In order to achieve decentralized, secure, private, and trustworthy machine learning operation and data management in this “data-centric AI” era, the joint consideration of blockchain techniques and machine learning may bring significant benefits and has attracted great interest from both academia and industry. On the one hand, decentralization and blockchain techniques can significantly facilitate training data and machine learning model sharing, decentralized intelligence, security, privacy, and trusted decision-making. On the other hand, Web3 platforms and applications, which are built on blockchain technologies and token-based economics, will greatly benefit from machine learning techniques in resource efficiency, scalability, trustworthy machine learning, and other ML-augmented tools for creators and participants in the end-to-end ecosystems.

This workshop focuses on how future researchers and practitioners should prepare themselves to achieve different trustworthiness requirements, such as security and privacy in machine learning through decentralization and blockchain techniques, as well as how to leverage machine learning techniques to automate some processes in current decentralized systems and ownership economies in Web3. We attempt to share recent related work from different communities, discuss the foundations of trustworthiness problems in machine learning and potential solutions, tools, and platforms via decentralization, blockchain and Web3, and chart out important directions for future work and cross-community collaborations.

Workshop

Foundation Models for Decision Making

Sherry Yang · Yilun Du · Jack Parker-Holder · Siddharth Karamcheti · Igor Mordatch · Shixiang (Shane) Gu · Ofir Nachum
Dec 3, 6:50 AM - 2:30 PM Room 291 - 292

Humans acquire vision, language, and decision making abilities through years of experience, arguably corresponding to millions of video frames, audio clips, and interactions with the world. Following this data-driven approach, recent foundation models trained on large and diverse datasets have demonstrated emergent capabilities and fast adaptation to a wide range of downstream vision and language tasks (e.g., BERT, DALL-E, GPT-3, CLIP). Meanwhile in the decision making and reinforcement learning (RL) literature, foundation models have yet to fundamentally shift the traditional paradigm in which an agent learns from its own or others’ collected experience, typically on a single task and with limited prior knowledge. Nevertheless, there has been a growing body of foundation-model-inspired research in decision making that often involves collecting large amounts of interactive data for self-supervised learning at scale. For instance, foundation models such as BERT and GPT-3 have been applied to modeling trajectory sequences of agent experience, and ever-larger datasets have been curated for learning multimodal, multitask, and generalist agents. These works demonstrate the potential benefits of foundation models on a broad set of decision making applications such as autonomous driving, healthcare systems, robotics, goal-oriented dialogue, and recommendation systems.

Despite early signs of success, foundation models for decision making remain largely underexplored, underutilized, and lacking solid empirical and theoretical grounding. The challenges faced by existing research are as follows:
1. Many traditional decision making benchmarks are (near-)Markovian (i.e., historyless), and this brings the value of sequence modeling into question. The true power of foundation models may require more complex tasks.
2. Decision making tasks are composed of multi-modal data. At minimum, the states (observations), actions, and rewards of a task are each of different types. Moreover, across different tasks, states and actions can be highly distinct (image vs. text observations, discrete vs. continuous actions).
3. Unlike vision and language, decision making agents can further interact with the environment to collect additional experience in conjunction with learning on existing data. How such an interactive component should be integrated with foundation models is not clear.
4. There already exists a large gap between theory and practice in decision making. Hastily applying large models to decision making might create an even greater gap.

Goal of the workshop: The goal of this workshop is to bring together the decision making community and the foundation models community in vision and language to confront the challenges in decision making at scale. The workshop will span high-level discussions on how foundation models can help decision making (if at all) and low-level algorithmic differences of decision, vision, and language which might lead to both opportunities and challenges for applying foundation models to decision making. More specific topics will include but are not limited to:
1. Common or distinct properties of vision, language, and decision making tasks that reassure or challenge the value of foundation models in decision making.
2. Introduction or proposals for new benchmarks to facilitate better research for foundation models for decision making.
3. How decision making can benefit from techniques already popular for foundation models, such as autoregressive sequence models, diffusion models, contrastive pretraining, masked autoencoders, prompting, etc.
4. Lessons learned from developing engineering frameworks, datasets and benchmarks, and evaluation protocols for foundation models in vision and language, and how the decision making community can benefit from these lessons.
5. How foundation models relate to the theoretical foundations of sequential decision making.

Workshop

Transfer Learning for Natural Language Processing

Alon Albalak · Colin Raffel · Chunting Zhou · Deepak Ramachandran · Xuezhe Ma · Sebastian Ruder
Dec 3, 6:50 AM - 3:00 PM Theater C

Transfer learning from large pre-trained language models (PLM) has become the de-facto method for a wide range of natural language processing tasks. Current transfer learning methods, combined with PLMs, have seen outstanding successes in transferring knowledge to new tasks, domains, and even languages. However, existing methods, including fine-tuning, in-context learning, parameter-efficient tuning, semi-parametric models with knowledge augmentation, etc., still lack consistently good performance across different tasks, domains, varying sizes of data resources, and diverse textual inputs.

This workshop aims to invite researchers from different backgrounds to share their latest work in efficient and robust transfer learning methods, discuss challenges and risks of transfer learning models when deployed in the wild, understand positive and negative transfer, and also debate over future directions.

Workshop

MATH-AI: Toward Human-Level Mathematical Reasoning

Pan Lu · Swaroop Mishra · Sean Welleck · Yuhuai Wu · Hannaneh Hajishirzi · Percy Liang
Dec 3, 6:55 AM - 3:00 PM Room 293 - 294

Mathematical reasoning is a unique aspect of human intelligence and a fundamental building block for scientific and intellectual pursuits. However, learning mathematics is often a challenging human endeavor that relies on expert instructors to create, teach and evaluate mathematical material. From an educational perspective, AI systems that aid in this process offer increased inclusion and accessibility, efficiency, and understanding of mathematics. Moreover, building systems capable of understanding, creating, and using mathematics offers a unique setting for studying reasoning in AI. This workshop will investigate the intersection of mathematics education and AI.

Workshop

OPT 2022: Optimization for Machine Learning

Courtney Paquette · Sebastian Stich · Quanquan Gu · Cristóbal Guzmán · John Duchi
Dec 3, 6:55 AM - 2:50 PM Room 295 - 296

OPT 2022 will bring together experts in optimization to share their perspectives while leveraging crossover experts in ML to share their views and recent advances. OPT 2022 honors this tradition of bringing together people from optimization and from ML in order to promote and generate new interactions between the two communities.

To foster the spirit of innovation and collaboration, a goal of this workshop, OPT 2022 will focus the contributed talks on research in Reliable Optimization Methods for ML. Many optimization algorithms for ML were originally developed to handle computational constraints (e.g., stochastic gradient-based algorithms), and their analyses followed the classical optimization approach of measuring performance by (i) computational cost and (ii) convergence for any input to the algorithm. As engineering capabilities increase and ML is widely adopted in real-world settings, practitioners are seeking optimization algorithms that go beyond finding the minimizer as fast as possible: they want reliable methods that handle the complications arising in practice. For example, bad actors increasingly attempt to fool models with deceptive data, which raises questions such as which algorithms are more robust to adversarial attacks, and whether one can design new algorithms that thwart them. The latter question motivates a new area of optimization focused on game-theoretic environments, that is, environments with competing forces at play, and on devising guarantees in such settings. Beyond this, a main reason for the success of ML is that optimization algorithms seemingly generate points that learn from training data; that is, we want minimizers of the training objective to provide meaningful interpretations on new data (generalization), yet we do not understand which features (e.g., the loss function, the algorithm, the depth of the architecture (deep learning), and/or the training samples) yield better generalization properties. These new areas of practical ML problems, and their deep ties to optimization, warrant a discussion between the two communities.
Specifically, we aim to discuss the meanings of generalization, the challenges facing real-world applications of ML, and the new paradigms for optimizers seeking to solve them.

Plenary Speakers: All invited speakers have agreed to attend the workshop in person.

* Niao He (ETH, Zurich, assistant professor)

* Zico Kolter (Carnegie Mellon University, associate professor)

* Lorenzo Rosasco (U Genova/MIT, assistant professor)

* Katya Scheinberg (Cornell, full professor)

* Aaron Sidford (Stanford, assistant professor)

Workshop

InterNLP: Workshop on Interactive Learning for Natural Language Processing

Kianté Brantley · Soham Dan · Ji Ung Lee · Khanh Nguyen · Edwin Simpson · Alane Suhr · Yoav Artzi
Dec 3, 7:00 AM - 2:55 PM Room 397

Interactive machine learning (IML) studies algorithms that learn from data collected through interaction with either a computational or human agent in a shared environment, via feedback on model decisions. In contrast to the common paradigm of supervised learning, IML does not assume access to pre-collected labeled data, thereby decreasing data costs. Instead, it allows systems to improve over time, empowering non-expert users to provide feedback. IML has seen wide success in areas such as video games and recommendation systems.
Although most downstream applications of NLP involve interactions with humans - e.g., via labels, demonstrations, corrections, or evaluation - common NLP models are not built to learn from or adapt to users through interaction. There remains a large research gap that must be closed to enable NLP systems that adapt on-the-fly to the changing needs of humans and dynamic environments through interaction.

Workshop

Workshop on Distribution Shifts: Connecting Methods and Applications

Chelsea Finn · Fanny Yang · Hongseok Namkoong · Masashi Sugiyama · Jacob Eisenstein · Jonas Peters · Rebecca Roelofs · Shiori Sagawa · Pang Wei Koh · Yoonho Lee
Dec 3, 7:00 AM - 3:00 PM Room 388 - 390

This workshop brings together domain experts and ML researchers working on mitigating distribution shifts in real-world applications.

Distribution shifts—where a model is deployed on a data distribution different from what it was trained on—pose significant robustness challenges in real-world ML applications. Such shifts are often unavoidable in the wild and have been shown to substantially degrade model performance in applications such as biomedicine, wildlife conservation, sustainable development, robotics, education, and criminal justice. For example, models can systematically fail when tested on patients from different hospitals or people from different demographics.

This workshop aims to convene a diverse set of domain experts and methods-oriented researchers working on distribution shifts. We are broadly interested in methods, evaluations and benchmarks, and theory for distribution shifts, and we are especially interested in work on distribution shifts that arise naturally in real-world application contexts. Examples of relevant topics include, but are not limited to:
- Examples of real-world distribution shifts in various application areas. We especially welcome applications that are not widely discussed in the ML research community, e.g., education, sustainable development, and conservation. We encourage submissions that characterize distribution shifts and their effects in real-world applications; it is not at all necessary to propose a solution that is algorithmically novel.
- Methods for improving robustness to distribution shifts. Relevant settings include domain generalization, domain adaptation, and subpopulation shifts, and we are interested in a wide range of approaches, from uncertainty estimation to causal inference to active data collection. We welcome methods that can work across a variety of shifts, as well as more domain-specific methods that incorporate prior knowledge on the types of shifts we wish to be robust on. We encourage evaluating these methods on real-world distribution shifts.
- Empirical and theoretical characterization of distribution shifts. Distribution shifts can vary widely in the way in which the data distribution changes, as well as the empirical trends they exhibit. What empirical trends do we observe? What empirical or theoretical frameworks can we use to characterize these different types of shifts and their effects? What kinds of theoretical settings capture useful components of real-world distribution shifts?
- Benchmarks and evaluations. We especially welcome contributions for subpopulation shifts, as they are underrepresented in current ML benchmarks. We are also interested in evaluation protocols that move beyond the standard assumption of fixed training and test splits -- for which applications would we need to consider other forms of shifts, such as streams of continually-changing data or feedback loops between models and data?

Workshop

A causal view on dynamical systems

Sören Becker · Alexis Bellot · Cecilia Casolo · Niki Kilbertus · Sara Magliacane · Yuyang (Bernie) Wang
Dec 3, 7:00 AM - 3:00 PM Room 387
Workshop

Human Evaluation of Generative Models

Divyansh Kaushik · Jennifer Hsia · Jessica Huynh · Yonadav Shavit · Samuel Bowman · Ting-Hao Huang · Douwe Kiela · Zachary Lipton · Eric Michael Smith
Dec 3, 7:30 AM - 2:15 PM Room 290
Workshop

Vision Transformers: Theory and applications

Fahad Shahbaz Khan · Gul Varol · Salman Khan · Ping Luo · Rao Anwer · Ashish Vaswani · Hisham Cholakkal · Niki Parmar · Joost van de Weijer · Mubarak Shah
Dec 8, 11:00 PM - 6:00 AM Virtual

Transformer models have demonstrated excellent performance on a diverse set of computer vision applications ranging from classification to segmentation on various data modalities such as images, videos, and 3D data. The goal of this workshop is to bring together computer vision and machine learning researchers working towards advancing the theory, architecture, and algorithmic design for vision transformer models, as well as the practitioners utilizing transformer models for novel applications and use cases.

The workshop’s motivation is to narrow the gap between research advancements in transformer designs and applications utilizing transformers for various computer vision tasks. The workshop also aims to widen the adoption of transformer models in vision-related industrial applications. We are interested in papers reporting experimental results on the use of transformers for any computer vision application, the challenges encountered, and the corresponding mitigation strategies, on topics including, but not limited to, image classification, object detection, segmentation, human-object interaction detection, and scene understanding based on 3D, video, and multimodal inputs.

Workshop

Challenges in Deploying and Monitoring Machine Learning Systems

Alessandra Tosi · Andrei Paleyes · Christian Cabrera · Fariba Yousefi · S Roberts
Dec 9, 1:00 AM - 11:15 AM Virtual

The goal of this event is to bring together people from different communities with the common interest in the Deployment of Machine Learning Systems.

With the dramatic rise of companies dedicated to providing Machine Learning software-as-a-service tools, Machine Learning has become a tool for solving real-world problems that is increasingly accessible in many industrial and social sectors. As the number of deployments grows, so does the number of known challenges and hurdles that practitioners face along the deployment process to ensure the continual delivery of good performance from deployed Machine Learning systems. Such challenges can lie in the adaptation of ML algorithms to concrete use cases, the discovery and quality of data, the maintenance of production ML systems, and ethics.

Workshop

The Symbiosis of Deep Learning and Differential Equations II

Michael Poli · Winnie Xu · Estefany Kelly Buchanan · Maryam Hosseini · Luca Herranz-Celotti · Martin Magill · Ermal Rrapaj · Qiyao Wei · Stefano Massaroli · Patrick Kidger · Archis Joglekar · Animesh Garg · David Duvenaud
Dec 9, 4:00 AM - 10:55 AM Virtual

In recent years, there has been a rapid increase of machine learning applications in computational sciences, with some of the most impressive results at the interface of deep learning (DL) and differential equations (DEs). DL techniques have been used in a variety of ways to dramatically enhance the effectiveness of DE solvers and computer simulations. These successes have widespread implications, as DEs are among the most well-understood tools for the mathematical analysis of scientific knowledge, and they are fundamental building blocks for mathematical models in engineering, finance, and the natural sciences. Conversely, DL algorithms based on DEs, such as neural differential equations and continuous-time diffusion models, have also been successfully employed as deep learning models. Moreover, theoretical tools from DE analysis have been used to glean insights into the expressivity and training dynamics of mainstream deep learning algorithms.

This workshop will aim to bring together researchers with backgrounds in computational science and deep learning to encourage intellectual exchanges, cultivate relationships and accelerate research in this area. The scope of the workshop spans topics at the intersection of DL and DEs, including theory of DL and DEs, neural differential equations, solving DEs with neural networks, and more.

Workshop

Workshop on neuro Causal and Symbolic AI (nCSI)

Matej Zečević · Devendra Singh Dhami · Christina Winkler · Thomas Kipf · Robert Peharz · Petar Veličković
Dec 9, 4:00 AM - 1:00 PM Virtual

Understanding causal interactions is central to human cognition and thereby a central quest in science, engineering, business, and law. Developmental psychology has shown that children explore the world in a similar way to how scientists do, asking questions such as “What if?” and “Why?” AI research aims to replicate these capabilities in machines. Deep learning in particular has brought about powerful tools for function approximation by means of end-to-end trainable deep neural networks. This capability has been corroborated by tremendous success in countless applications. However, their lack of interpretability and reasoning capabilities proves to be a hindrance towards building systems of human-like ability. Therefore, enabling causal reasoning capabilities in deep learning is of critical importance for research on the path towards human-level intelligence. First steps towards neural-causal models exist and promise a vision of AI systems that perform causal inferences as efficiently as modern-day neural models. Similarly, classical symbolic methods are being revisited and reintegrated into current systems to allow for reasoning capabilities beyond pure pattern recognition. The Pearlian formalization of causality has revealed a theoretically sound and practically strict hierarchy of reasoning that serves as a helpful benchmark for evaluating the reasoning capabilities of neuro-symbolic systems.

Our aim is to bring together researchers interested in the integration of research areas in artificial intelligence (general machine and deep learning, symbolic and object-centric methods, and logic) with rigorous formalizations of causality with the goal of developing next-generation AI systems.

Workshop

Learning Meaningful Representations of Life

Elizabeth Wood · Adji Bousso Dieng · Aleksandrina Goeva · Alex X Lu · Anshul Kundaje · Chang Liu · Debora Marks · Ed Boyden · Eli N Weinstein · Lorin Crawford · Mor Nitzan · Rebecca Boiarsky · Romain Lopez · Tamara Broderick · Ray Jones · Wouter Boomsma · Yixin Wang · Stephen Ra
Dec 9, 4:30 AM - 2:00 PM Virtual

All events will be in a non-NeurIPS Zoom and on Gather.Town, without embedded streaming. Links below.

Workshop

HCAI@NeurIPS 2022, Human Centered AI

Michael Muller · Plamen P Angelov · Hal Daumé III · Shion Guha · Q.Vera Liao · Nuria Oliver · David Piorkowski
Dec 9, 5:00 AM - 12:00 PM Virtual
Workshop

Empowering Communities: A Participatory Approach to AI for Mental Health

Andrey Kormilitzin · Dan Joyce · Nenad Tomasev · Kevin McKee
Dec 9, 6:40 AM - 3:00 PM Virtual

Mental illness is the complex product of biological, psychological and social factors that foreground issues of under-representation, institutional and societal inequalities, bias and intersectionality in determining the outcomes for people affected by these disorders – the very same priorities that AI/ML fairness has begun to attend to in the past few years.

Despite the history of impoverished material investment in mental health globally, in the past decade, research practices in mental health have begun to embrace patient and citizen activism and the field has emphasised stakeholder (patients and public) participation as a central and absolutely necessary component of basic, translational and implementation science. This positions mental healthcare as something of an exemplar of participatory practices in healthcare from which technologists, engineers and scientists can learn.

The aim of the workshop is to address sociotechnical issues in healthcare AI/ML that are idiosyncratic to mental health.

Uniquely, this workshop will invite and bring together practitioners and researchers rarely found together “in the same room”, including:
- Under-represented groups with special interest in mental health and illness
- Clinical psychiatry, psychology and allied mental health professions
- Technologists, scientists and engineers from the machine learning communities

We will create an open, dialogue-focused exchange of expertise to advance mental health using data science and AI/ML with the expected impact of addressing the aforementioned issues and attempting to develop consensus on the open challenges.

Workshop

Trustworthy and Socially Responsible Machine Learning

Huan Zhang · Linyi Li · Chaowei Xiao · Zico Kolter · Anima Anandkumar · Bo Li
Dec 9, 6:45 AM - 4:15 PM Virtual

To address the negative societal impacts of ML, researchers have looked into different principles and constraints to ensure trustworthy and socially responsible machine learning systems. This workshop makes a first attempt towards bridging the gap between the security, privacy, fairness, ethics, game theory, and machine learning communities and aims to discuss the principles and experiences of developing trustworthy and socially responsible machine learning systems. The workshop also focuses on how future researchers and practitioners should prepare themselves for reducing the risks of unintended behaviors of sophisticated ML models.

This workshop aims to bring together researchers interested in the emerging and interdisciplinary field of trustworthy and socially responsible machine learning from a broad range of disciplines with different perspectives to this problem. We attempt to highlight recent related work from different communities, clarify the foundations of trustworthy machine learning, and chart out important directions for future work and cross-community collaborations.

Workshop

Graph Learning for Industrial Applications: Finance, Crime Detection, Medicine and Social Media

Manuela Veloso · John Dickerson · Senthil Kumar · Eren K. · Jian Tang · Jie Chen · Peter Henstock · Susan Tibbs · Anisoara Calinescu · Naftali Cohen · C. Bayan Bruss · Armineh Nourbakhsh
Dec 9, 6:50 AM - 4:10 PM Virtual

Graph structures provide unique opportunities in representing complex systems that are challenging to model otherwise, due to a variety of complexities such as large number of entities, multiple entity types, different relationship types, and diverse patterns.

This provides unique opportunities in using graphs and graph-based solutions within a wide array of industrial applications. In financial services, graph representations are used to model markets’ transactional systems and detect financial crime. In the healthcare field, knowledge graphs have gained traction as the best way of representing the interdisciplinary scientific knowledge across biology, chemistry, pharmacology, toxicology, and medicine. By mining scientific literature and combining it with various data sources, knowledge graphs provide an up-to-date framework for both human and computer intelligence to generate new scientific hypotheses, drug strategies, and ideas.

In addition to the benefits of graph representation, graph-native machine-learning solutions such as graph neural networks, graph convolutional networks, and others have been implemented effectively in many industrial systems. In finance, graph dynamics have been studied to capture emerging phenomena in volatile markets. In healthcare, these techniques have extended traditional network analysis approaches to enable link prediction. A recent example was BenevolentAI’s knowledge-graph-based prediction that baricitinib (now in clinical trials), a rheumatoid arthritis drug by Eli Lilly, could mitigate COVID-19’s “cytokine storm”.

Graph representations allow researchers to model inductive biases, encode domain expertise, combine explicit knowledge with latent semantics, and mine patterns at scale. This facilitates explainability, robustness, transparency, and adaptability—aspects which are all uniquely important to the financial services industry as well as the (bio)medical domain. Recent work on numeracy, tabular data modeling, multimodal reasoning, and differential analysis increasingly relies on graph-based learning to improve performance and generalizability. Additionally, many financial datasets naturally lend themselves to graph representation—from supply chains and shipping routes to investment networks and business hierarchies. Similarly, much of the healthcare space is best described by complex networks, from the micro level of chemical synthesis protocols and biological pathways to the macro level of public health.

In recent years, knowledge graphs have shown promise in furthering the capabilities of graph representations and learning techniques through unique opportunities such as reasoning. Reasoning over knowledge graphs opens exciting possibilities for complementing the pattern-detection capabilities of traditional machine learning solutions with interpretability and explicit reasoning.

This path forward highlights the importance of graphs in the future of AI and machine learning systems. This workshop highlights the current and emerging opportunities from the perspective of industrial applications such as financial services, healthcare, (bio)medicine, and crime detection. The workshop is an opportunity for academic and industrial AI researchers to come together and explore shared challenges, new topics, and emerging opportunities.

Workshop

Tackling Climate Change with Machine Learning

Peetak Mitra · Maria João Sousa · Mark Roth · Jan Drgona · Emma Strubell · Yoshua Bengio
Dec 9, 7:00 AM - 6:00 PM Virtual

The focus of this workshop is the use of machine learning to help address climate change, encompassing mitigation efforts (reducing greenhouse gas emissions), adaptation measures (preparing for unavoidable consequences), and climate science (our understanding of the climate and future climate predictions). Specifically, we aim to: (1) showcase high-impact applications of ML to climate change mitigation, adaptation, and climate science, (2) discuss related research directions to which the ML community can contribute, (3) brainstorm mechanisms to scale early academic research to successful, viable deployments, and (4) encourage fruitful collaboration between the ML community and a diverse set of researchers and practitioners from climate change-related fields. Building on our past workshops on this topic, this workshop particularly aims to explore the theme of climate change-informed metrics for AI, focusing both on (a) the domain-specific metrics by which AI systems should be evaluated when used as a tool for climate action, and (b) the climate change-related implications of using AI more broadly.

Workshop

Workshop on Machine Learning Safety

Dan Hendrycks · Victoria Krakovna · Dawn Song · Jacob Steinhardt · Nicholas Carlini
Dec 9, 7:00 AM - 2:00 PM Virtual

Designing systems to operate safely in real-world settings is a topic of growing interest in machine learning. As ML becomes more capable and widespread, long-term and long-tail safety risks will grow in importance. To make the adoption of ML more beneficial, various aspects of safety engineering and oversight need to be proactively addressed by the research community. This workshop will bring together researchers from machine learning communities to focus on research topics in Robustness, Monitoring, Alignment, and Systemic Safety.
* Robustness is designing systems to be reliable in the face of adversaries and highly unusual situations.
* Monitoring is detecting anomalies, malicious use, and discovering unintended model functionality.
* Alignment is building models that represent and safely optimize difficult-to-specify human values.
* Systemic Safety is using ML to address broader risks related to how ML systems are handled, such as cyberattacks, facilitating cooperation, or improving the decision-making of public servants.

Workshop

5th Robot Learning Workshop: Trustworthy Robotics

Alex Bewley · Roberto Calandra · Anca Dragan · Igor Gilitschenski · Emily Hannigan · Masha Itkina · Hamidreza Kasaei · Jens Kober · Danica Kragic · Nathan Lambert · Julien PEREZ · Fabio Ramos · Ransalu Senanayake · Jonathan Tompson · Vincent Vanhoucke · Markus Wulfmeier
Dec 9, 7:00 AM - 7:00 PM Virtual

Machine learning (ML) has been one of the premier drivers of recent advances in robotics research and has begun to impact several real-world robotic applications in unstructured and human-centric environments, such as transportation, healthcare, and manufacturing. At the same time, robotics has been a key motivation for numerous research problems in artificial intelligence, from efficient algorithms to robust generalization of decision models. However, considerable obstacles remain to fully leveraging state-of-the-art ML in real-world robotics applications. For capable robots equipped with ML models, guarantees on robustness and additional analysis of the social implications of these models are required before they can be used in real-world robotic domains that interface with humans (e.g., autonomous vehicles, and tele-operated or assistive robots).

To support the development of robots that are safely deployable among humans, the field must consider trustworthiness as a central aspect in the development of real-world robot learning systems. Unlike many other applications of ML, the combined complexity of physical robotic platforms and learning-based perception-action loops presents unique technical challenges. These challenges include concrete technical problems such as very high performance requirements, explainability, predictability, verification, uncertainty quantification, and robust operation in dynamically distributed, open-set domains. Since robots are developed for use in human environments, in addition to these technical challenges, we must also consider the social aspects of robotics such as privacy, transparency, fairness, and algorithmic bias. Both technical and social challenges also present opportunities for robotics and ML researchers alike. Contributing to advances in the aforementioned sub-fields promises to have an important impact on real-world robot deployment in human environments, building towards robots that use human feedback, indicate when their model is uncertain, and are safe to operate autonomously in safety-critical settings such as healthcare and transportation.

This year’s robot learning workshop aims at discussing unique research challenges from the lens of trustworthy robotics. We adopt a broad definition of trustworthiness that highlights different application domains and the responsibility of the robotics and ML research communities to develop “robots for social good.” Bringing together experts with diverse backgrounds from the ML and robotics communities, the workshop will offer new perspectives on trust in the context of ML-driven robot systems.

Scope of contributions:

Specific areas of interest include but are not limited to:

* epistemic uncertainty estimation in robotics;
* explainable robot learning;
* domain adaptation and distribution shift in robot learning;
* multi-modal trustworthy sensing and sensor fusion;
* safe deployment for applications such as agriculture, space, science, and healthcare;
* privacy aware robotic perception;
* information system security in robot learning;
* learning from offline data and safe on-line learning;
* simulation-to-reality transfer for safe deployment;
* robustness and safety evaluation;
* certifiability and performance guarantees;
* robotics for social good;
* safe robot learning with humans in the loop;
* algorithmic bias in robot learning;
* ethical robotics.

Workshop

Workshop on Machine Learning for Creativity and Design

Tom White · Yingtao Tian · Lia Coleman · Samaneh Azadi
Dec 9, 7:15 AM - 7:00 PM Virtual
Workshop

Deep Reinforcement Learning Workshop

Karol Hausman · Qi Zhang · Matthew Taylor · Martha White · Suraj Nair · Manan Tomar · Risto Vuorio · Ted Xiao · Zeyu Zheng · Manan Tomar
Dec 9, 8:25 AM - 5:35 PM Virtual

In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multi-agent interactions. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside of the field gain a high-level view about the current state of the art and potential directions for future contributions.

Workshop

Cultures of AI and AI for Culture

Alex Hanna · Rida Qadri · Fernando Diaz · Nick Seaver · Morgan Scheuerman
Dec 9, 9:00 AM - 3:30 PM Virtual

Panels 1b and 2b will be hosted in a separate Zoom room.

Contributed Panel 1b: Frameworks of AI/Culture entanglement
Panel Zoom Link:
https://us06web.zoom.us/j/85234340757?pwd=TEw1UkpYbmZWQktLSjc5M241WHd6QT09
Password: fishvale

Contributed Panel 2b: Theorizing AI/Culture entanglement
Panel Zoom Link:
https://us06web.zoom.us/j/85234340757?pwd=TEw1UkpYbmZWQktLSjc5M241WHd6QT09
Password: fishvale
