Workshops
Jessica Schrouff · Awa Dieng · Golnoosh Farnadi · Mark Kwegyir-Aggrey · Miriam Rateike

Trustworthy machine learning (ML) encompasses multiple fields of research, including (but not limited to) robustness, algorithmic fairness, interpretability and privacy. Recently, relationships between techniques and metrics used across different fields of trustworthy ML have emerged, leading to interesting work at the intersection of algorithmic fairness, robustness, and causality.

On one hand, causality has been proposed as a powerful tool for addressing the limitations of initial statistical definitions of fairness. However, questions have emerged regarding the applicability of such approaches in practice and the suitability of a causal framing for studies of bias and discrimination. On the other hand, the robustness literature has surfaced promising approaches to improve fairness in ML models. For instance, parallels can be drawn between individual fairness and local robustness guarantees. In addition, the interactions between fairness and robustness can help us understand how fairness guarantees hold under distribution shift or adversarial/poisoning attacks.
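To make the fairness/robustness parallel concrete, one standard pairing of definitions (individual fairness in the sense of Dwork et al., and the usual local-robustness condition; notation added here for illustration) is:

```latex
% Individual fairness: similar individuals receive similar predictions
d_Y\big(f(x), f(x')\big) \le L\, d_X(x, x') \quad \text{for all } x, x'
% Local robustness at x: the prediction is stable on a small neighborhood
f(x') = f(x) \quad \text{for all } x' \text{ with } \|x' - x\| \le \epsilon
```

Both conditions bound how much f may vary over nearby inputs; they differ mainly in the choice of metric and neighborhood.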

After a first edition of this workshop that focused on causality and interpretability, we will turn to the intersection of algorithmic fairness with recent techniques in causality and robustness. In this context, we will investigate how these different topics relate, but also how they can augment each other to provide better or more suited …

Erin Grant · Fábio Ferreira · Frank Hutter · Jonathan Richard Schwarz · Joaquin Vanschoren · Huaxiu Yao

Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to efficiently learn new tasks, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations, classifiers, and policies for acting in environments. In practice, meta-learning has been shown to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems. Moreover, the ability to improve one's own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and neuroscience shows a strong connection between human reward learning and the growing sub-field of meta-reinforcement learning.

Courtney Paquette · Quanquan Gu · Oliver Hinder · Katya Scheinberg · Sebastian Stich · Martin Takac

OPT 2021 will bring together experts in optimization to share their perspectives while leveraging crossover experts in ML to share their views and recent advances. OPT 2021 honors this tradition of bringing together people from optimization and from ML in order to promote and generate new interactions between the two communities.

To foster the spirit of innovation and collaboration, a goal of this workshop, OPT 2021 will focus the contributed talks on research in “Beyond Worst-case Complexity”. Classical optimization analyses measure the performance of algorithms based on (1) the computational cost and (2) convergence for any input to the algorithm. Yet algorithms with worse traditional complexity (e.g., SGD and its variants, ADAM, etc.) are increasingly popular in practice for training deep neural networks and other ML tasks. This leads to questions such as: what are good modeling assumptions for ML problems under which to measure an optimization algorithm’s success, and how can we leverage these to better understand the performance of known (and new) algorithms? For instance, typical optimization problems in ML may be better conditioned than their worst-case counterparts, in part because the problems are highly structured and/or high-dimensional (large number of features/samples). One could leverage this observation to design algorithms with …
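As a concrete instance of such a worst-case guarantee (a standard textbook bound, reproduced here for illustration): for any convex, L-smooth objective f, gradient descent with step size 1/L satisfies

```latex
f(x_k) - f(x^\star) \;\le\; \frac{L\, \|x_0 - x^\star\|^2}{2k},
```

a bound that holds uniformly over the whole problem class and therefore ignores any additional structure a typical ML instance may have.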

Zeynep Akata · Lucas Beyer · Sanghyuk Chun · A. Sophia Koepke · Diane Larlus · Seong Joon Oh · Rafael Rezende · Sangdoo Yun · Xiaohua Zhai

Since its release in 2010, ImageNet has played an instrumental role in the development of deep learning architectures for computer vision, enabling neural networks to greatly outperform hand-crafted visual representations. ImageNet also quickly became the go-to benchmark for model architectures and training techniques, which eventually reached far beyond image classification. Today’s models are getting close to “solving” the benchmark. Models trained on ImageNet have been used as strong initializations for numerous downstream tasks. The ImageNet dataset has even been used for tasks going way beyond its initial purpose of training classification models. It has been leveraged and reinvented for tasks such as few-shot learning, self-supervised learning, and semi-supervised learning. Interesting re-creations of the ImageNet benchmark enable the evaluation of novel challenges like robustness, bias, or concept generalization. More accurate labels have been provided. About 10 years later, ImageNet symbolizes a decade of staggering advances in computer vision, deep learning, and artificial intelligence.

We believe now is a good time to discuss what’s next: Did we solve ImageNet? What are the main lessons learnt thanks to this benchmark? What should the next generation of ImageNet-like benchmarks encompass? Is language supervision a promising alternative? How can we reflect on the diverse requirements …

Samuel Albanie · João Henriques · Luca Bertinetto · Alex Hernandez-Garcia · Hazel Doughty · Gul Varol

Machine learning research has benefited considerably from the adoption of standardised public benchmarks. While the importance of these benchmarks is undisputed, we argue against the current incentive system and its heavy reliance upon performance as a proxy for scientific progress. The status quo incentivises researchers to “beat the state of the art”, potentially at the expense of deep scientific understanding and rigorous experimental design. Since typically only positive results are rewarded, the negative results inevitably encountered during research are often omitted, allowing many other groups to unknowingly and wastefully repeat these negative findings.

Pre-registration is a publishing and reviewing model that aims to address these issues by changing the incentive system. A pre-registered paper is a regular paper that is submitted for peer-review without any experimental results, describing instead an experimental protocol to be followed after the paper is accepted. This implies that it is important for the authors to make compelling arguments from theory or past published evidence. As for reviewers, they must assess these arguments together with the quality of the experimental design, rather than comparing numeric results. While pre-registration has been widely adopted in fields such as medicine and psychology, there is little such experience in the machine …

Aaron Schein · Melanie F. Pradier · Jessica Forde · Stephanie Hyland · Francisco Ruiz

Beautiful ideas have shaped scientific progress throughout history. As Paul Dirac said, “If one is working from the point of view of getting beauty in one's equations, (…), one is on a sure line of progress.” However, beautiful ideas are often overlooked in a research environment that heavily emphasizes state-of-the-art (SOTA) results, where the worth of scientific works is defined by their immediate utility and quantitative superiority instead of their creativity, diversity, and elegance. This workshop will explore gaps between the form and function (or, the intrinsic and extrinsic value) of ideas in ML and AI research. We will explore that disconnect by asking researchers to submit their “beautiful” ideas that don’t (yet) “work”. We will ask them to explain why their idea has intrinsic value, and hypothesize why it hasn’t (yet) shown its extrinsic value. In doing so, we will create a space for researchers to help each other get their “beautiful” ideas “working”.

David Bruns-Smith · Arthur Gretton · Limor Gultchin · Niki Kilbertus · Krikamol Muandet · Evan Munro · Angela Zhou

The Machine Learning Meets Econometrics (MLECON) workshop will serve as an interface for researchers from machine learning and econometrics to understand challenges and recognize opportunities that arise from the synergy between these two disciplines as well as to exchange new ideas that will help propel the fields. Our one-day workshop will consist of invited talks from world-renowned experts, shorter talks from contributed authors, a Gather.Town poster session, and an interdisciplinary panel discussion. To encourage cross-over discussion among those publishing in different venues, the topic of our panel discussion will be “Machine Learning in Social Systems: Challenges and Opportunities from Program Evaluation”. It was designed to highlight the complexity of evaluating social and economic programs as well as shortcomings of current approaches in machine learning and opportunities for methodological innovation. These challenges include more complex environments (markets, equilibrium, temporal considerations) and behavior (heterogeneity, delayed effects, unobserved confounders, strategic response). Our team of organizers and program committees is diverse in terms of gender, race, affiliations, country of origin, disciplinary background, and seniority levels. We aim to convene a broad variety of viewpoints on methodological axes (nonparametrics, machine learning, econometrics) as well as areas of application. Our invited speakers and panelists are leading …

Jason Altschuler · Charlotte Bunne · Laetitia Chapel · Marco Cuturi · Rémi Flamary · Gabriel Peyré · Alexandra Suvorikova

Over the last few years, optimal transport (OT) has quickly become a central topic in machine learning. OT is now routinely used in many areas of ML, ranging from the theoretical use of OT flow for controlling learning algorithms to the inference of high-dimensional cell trajectories in genomics. The Optimal Transport and Machine Learning (OTML) workshop series (in '14, '17, '19) has been instrumental in shaping this research thread. For this new installment of OTML, we aim even bigger by hosting an exceptional keynote speaker, Alessio Figalli, who received the 2018 Fields Medal for his breakthroughs in the analysis of the regularity properties of OT. OTML will be a unique opportunity for cross-fertilization between recent advances in pure mathematics and challenging high-dimensional learning problems.
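For reference, the Kantorovich formulation of OT between probability measures μ and ν with ground cost c reads (standard notation, added here for illustration):

```latex
\mathrm{OT}(\mu, \nu) \;=\; \inf_{\pi \in \Pi(\mu, \nu)} \int c(x, y)\, \mathrm{d}\pi(x, y)
```

where Π(μ, ν) denotes the set of couplings whose marginals are μ and ν.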

Payal Chandak · Yuanqi Du · Tianfan Fu · Wenhao Gao · Kexin Huang · Shengchao Liu · Ziming Liu · Gabriel Spadon · Max Tegmark · Hanchen Wang · Adrian Weller · Max Welling · Marinka Zitnik

Machine learning (ML) has revolutionized a wide array of scientific disciplines, including chemistry, biology, physics, materials science, neuroscience, earth science, cosmology, electronics, and mechanical science. It has solved scientific challenges that were never solved before, e.g., predicting 3D protein structure, imaging black holes, automating drug discovery, and so on. Despite this promise, several critical gaps stifle algorithmic and scientific innovation in "AI for Science": (1) unrealistic methodological assumptions or directions, (2) overlooked scientific questions, (3) limited exploration at the intersections of multiple disciplines, (4) science of science, (5) responsible use and development of AI for science.
However, very little work has been done to bridge these gaps, mainly because of the missing link between distinct scientific communities. While many workshops focus on AI for specific scientific disciplines, they are all concerned with the methodological advances within a single discipline (e.g., biology) and are thus unable to examine the crucial questions mentioned above. This workshop will fulfill this unmet need and facilitate community building; with hundreds of ML researchers beginning projects in this area, the workshop will bring them together to consolidate the fast-growing area of "AI for Science" into a recognized field.

Mehdi Rezaghoizadeh · Lili Mou · Yue Dong · Pascal Poupart · Ali Ghodsi · Qun Liu

This workshop aims to introduce some fundamental problems in the field of natural language and speech processing that can be of interest to the general machine learning and deep learning community, in order to improve the efficiency of models, their training, and inference. The workshop program offers an interactive platform for gathering experts and talent from academia and industry through invited keynote talks, panel discussions, paper submissions, reviews, posters, oral presentations, and a mentorship program.
This will provide an opportunity to discuss and learn from each other, exchange ideas, build connections, and brainstorm on potential solutions and future collaborations. The topics of this workshop can be of interest to people working on general machine learning, deep learning, optimization, theory, and NLP & speech applications.

Call for Papers
We encourage the NeurIPS community to submit their solutions, ideas, and ongoing work concerning data, model, training, and inference efficiency for NLP and speech processing. The scope of this workshop includes, but is not limited to, the following topics.
(For more details please visit the Workshop Homepage.)

- Efficient Pre-Training and Fine-Tuning
- Model Compression
- Efficient Training
- Data Efficiency
- Edge Intelligence

Important Dates:
- Submission Deadline: September 18, 2021 (AOE) …

Nghia Hoang · Lam Nguyen · Pin-Yu Chen · Tsui-Wei Weng · Sara Magliacane · Bryan Kian Hsiang Low · Anoop Deoras

Federated Learning (FL) has recently emerged as the de facto framework for distributed machine learning (ML) that preserves the privacy of data, especially given the proliferation of mobile and edge devices with their increasing capacity for storage and computation. To fully utilize the vast amount of geographically distributed, diverse, and privately owned data stored across these devices, FL provides a platform on which local devices can build their own local models, whose training processes can be synchronized via sharing differential parameter updates. This is done without exposing their private training data, which helps mitigate the risk of privacy violation, in light of recent policies such as the General Data Protection Regulation (GDPR). This potential use of FL has since attracted explosive attention from the ML community, resulting in a vast and growing body of both theoretical and empirical literature that pushes FL closer to being the new standard of ML as a democratized data analytics service.
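As a minimal sketch of the synchronization step described above, here is a FedAvg-style round on a toy linear model (illustrative code, not from any specific FL framework; real systems add multiple local epochs, secure aggregation, and differential privacy):

```python
import numpy as np

def local_update(w, X, y, lr=0.01):
    # One client's local step on its private data: gradient of squared loss.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w_global, clients):
    # Clients train locally; only model parameters are shared with the
    # server, which averages them (FedAvg). Raw data never leaves a device.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    return np.mean(local_ws, axis=0)

# Toy usage: three clients, each holding private (X, y) data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(100):
    w = federated_round(w, clients)
```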

Interestingly, as FL comes closer to being deployable in real-world scenarios, it also surfaces a growing set of challenges on trustworthiness, fairness, auditability, scalability, robustness, security, privacy preservation, decentralizability, data ownership and personalizability that are all becoming increasingly important …

Ingmar Posner · Francesca Rossi · Lior Horesh · Steve Fleming · Oiwi Parker Jones · Rohan Paul · Biplav Srivastava · Andrea Loreggia · Marianna Ganapini

Recent progress in artificial intelligence has transformed the way we live, work, and interact. Machines are mastering complex games and are learning increasingly challenging manipulation skills. Yet where are the robot agents that work for, with, and alongside us? These recent successes rely heavily on the ability to learn at scale, often within the confines of a virtual environment. This presents significant challenges for embodied systems acting and interacting in the real world. In contrast, we require our robots and algorithms to operate robustly in real time, to learn from a limited amount of data, to make mission- and sometimes safety-critical decisions, and increasingly even to display a knack for creative problem solving. Achieving this goal will require artificial agents to be able to assess - or introspect - their own competencies and their understanding of the world. Faced with similar complexity, there are a number of cognitive mechanisms which allow humans to act and interact successfully in the real world. Our ability to assess the quality of our own thinking - that is, our capacity for metacognition - plays a central role in this. We posit that recent advances in machine learning have, for the first time, enabled the effective …

Nikolaos Vasiloglou · Parisa Kordjamshidi · Zenna Tavares · Maximilian Schleich · Nantia Makrynioti · Kirk Pruhs

Relational data represents the vast majority of data present in the enterprise world. Yet none of the ML computation happens inside a relational database, where the data reside; instead, a lot of time is wasted denormalizing the data and moving them outside of the databases in order to train models. Relational learning, which takes advantage of relational data structure, has been a research area for twenty years, but it has not been connected with relational database systems, despite the fact that relational databases are the natural place for storing relational data. Recent advances in database research have shown that it is possible to take advantage of the relational structure in data in order to accelerate ML algorithms. Research in relational algebra originating from the database community has shown that it is possible to further accelerate linear algebra operations. Probabilistic programming has also been proposed as a framework for AI that can be realized in relational databases. Data programming, a mechanism for weak/self supervision, is slowly migrating to the natural place for storing data: the database. Finally, as models in deep learning grow, several systems are being developed for model management inside relational databases.

Jennifer Hu · Noga Zaslavsky · Aida Nematzadeh · Michael Franke · Roger Levy · Noah Goodman

Pragmatics – the aspects of language use that involve reasoning about context and other agents’ goals and belief states – has traditionally been treated as the “wastebasket” of language research (Bar-Hillel 1971), posing a challenge for both cognitive theories and artificial intelligence systems. Ideas from theoretical linguistics have inspired computational applications, such as referential expression generation (Krahmer and van Deemter, 2012) or computational models of dialogue and the recognition of speech or dialogue acts (Bunt and Black, 2000; Jurafsky, 2006; Ginzburg and Fernández, 2010; Bunt, 2016). But only recently have powerful artificial models based on neural or subsymbolic architectures come into focus that generate or interpret language in pragmatically sophisticated and potentially open-ended ways (Golland et al. 2010, Andreas and Klein 2016, Monroe et al. 2017, Fried et al. 2018), building upon simultaneous advances in the cognitive science of pragmatics (Franke 2011, Frank and Goodman 2012). However, such models still fall short of human pragmatic reasoning in several important respects. For example, existing approaches are often tailored to, or even trained to excel on, a specific pragmatic task (e.g., Mao et al. (2016) on discriminatory object description), leaving human-like task flexibility unaccounted for. It also remains largely underexplored how pragmatics …
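As one concrete example of such a model, the Rational Speech Acts framework of Frank and Goodman (2012) derives a pragmatic listener from a literal one by recursive reasoning (standard formulation, utterance costs omitted; shown here for illustration):

```latex
L_0(m \mid u) \propto [\![u]\!](m)\, P(m), \qquad
S_1(u \mid m) \propto \exp\!\big(\alpha \log L_0(m \mid u)\big), \qquad
L_1(m \mid u) \propto S_1(u \mid m)\, P(m),
```

where ⟦u⟧ is the literal (truth-conditional) meaning of utterance u and α is a rationality parameter.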

Ellen Zhong · Raphael Townshend · Stephan Eismann · Namrata Anand · Roshan Rao · John Ingraham · Wouter Boomsma · Sergey Ovchinnikov · Bonnie Berger

Structural biology, the study of proteins and other biomolecules through their 3D structures, is a field on the cusp of transformation. While measuring and interpreting biomolecular structures has traditionally been an expensive and difficult endeavor, recent machine-learning based modeling approaches have shown that it will become routine to predict and reason about structure at proteome scales with unprecedented atomic resolution. This broad liberation of 3D structure within bioscience and biomedicine will likely have transformative impacts on our ability to create effective medicines, to understand and engineer biology, and to design new molecular materials and machinery. Machine learning also shows great promise to continue to revolutionize many core technical problems in structural biology, including protein design, modeling protein dynamics, predicting higher order complexes, and integrating learning with experimental structure determination.

At this inflection point, we hope that the Machine Learning in Structural Biology (MLSB) workshop will help bring community and direction to this rising field. To achieve these goals, this workshop will bring together researchers from a unique and diverse set of domains, including core machine learning, computational biology, experimental structural biology, geometric deep learning, and natural language processing.

Anima Anandkumar · Kyle Cranmer · Mr. Prabhat · Lenka Zdeborová · Atilim Gunes Baydin · Juan Carrasquilla · Emine Kucukbenli · Gilles Louppe · Benjamin Nachman · Brian Nord · Savannah Thais

The "Machine Learning and the Physical Sciences" workshop aims to provide a cutting-edge venue for research at the interface of machine learning (ML) and the physical sciences. This interface spans (1) applications of ML in physical sciences (“ML for physics”) and (2) developments in ML motivated by physical insights (“physics for ML”).

ML methods have had great success in learning complex representations of data that enable novel modeling and data processing approaches in many scientific disciplines. Physical sciences span problems and challenges at all scales in the universe: from finding exoplanets in trillions of sky pixels, to finding ML inspired solutions to the quantum many-body problem, to detecting anomalies in event streams from the Large Hadron Collider, to predicting how extreme weather events will vary with climate change. Tackling a number of associated data-intensive tasks including, but not limited to, segmentation, 3D computer vision, sequence modeling, causal reasoning, generative modeling, and efficient probabilistic inference are critical for furthering scientific discovery. In addition to using ML models for scientific discovery, tools and insights from the physical sciences are increasingly brought to the study of ML models.

By bringing together ML researchers and physical scientists who apply and study ML, we expect …

Ludger Paehler · William Moses · Maria I Gorinova · Assefaw H. Gebremedhin · Jan Hueckelheim · Sri Hari Krishna Narayanan

Differentiable programming allows for automatically computing derivatives of functions within a high-level language. It has become increasingly popular within the machine learning (ML) community: differentiable programming has been used within backpropagation of neural networks, probabilistic programming, and Bayesian inference. Fundamentally, differentiable programming frameworks empower machine learning and its applications: the availability of efficient and composable automatic differentiation (AD) tools has led to advances in optimization, differentiable simulators, engineering, and science.
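A minimal example of the idea in JAX (illustrative; any AD framework would serve):

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # An ordinary differentiable program: prediction followed by squared error.
    pred = jnp.tanh(x @ w)
    return jnp.mean((pred - y) ** 2)

# AD returns the exact gradient of the program with respect to its first
# argument, composable with the rest of the language (jit, vmap, ...).
grad_loss = jax.grad(loss)

w, x, y = jnp.zeros(3), jnp.ones((4, 3)), jnp.ones(4)
print(grad_loss(w, x, y))  # array with the same shape as w
```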

While AD tools have greatly increased the productivity of ML scientists and practitioners, many problems remain unsolved. Crucially, there is little communication between the broad group of AD users, programming languages researchers, and differentiable programming developers, resulting in these groups working in isolation. We propose the Differentiable Programming workshop as a forum to narrow the gaps between differentiable and probabilistic language design, efficient automatic differentiation engines, and higher-level applications of differentiable programming. We hope this workshop will foster closer collaboration between language designers and domain scientists by bringing together a diverse part of the differentiable programming community, including people working on core automatic differentiation tools, higher-level frameworks that rely upon AD (such as probabilistic programming and differentiable simulators), and applications that use differentiable programs to solve scientific …

Arturo Deza · Joshua Peterson · N Apurva Ratan Murty · Tom Griffiths

The goal of the 3rd Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop is to disseminate relevant, parallel findings in the fields of computational neuroscience, psychology, and cognitive science that may inform modern machine learning. In the past few years, machine learning methods, especially deep neural networks, have widely permeated the vision science, cognitive science, and neuroscience communities. As a result, scientific modeling in these fields has greatly benefited, producing a swath of potentially critical new insights into the human mind. Since human performance remains the gold standard for many tasks, these cross-disciplinary insights and analytical tools may point towards solutions to many of the current problems that machine learning researchers face (e.g., adversarial attacks, compression, continual learning, and self-supervised learning). Thus we propose to invite leading cognitive scientists with strong computational backgrounds to disseminate their findings to the machine learning community, with the hope of closing the loop by nourishing new ideas and creating cross-disciplinary collaborations. In particular, this year's version of the workshop will have a heavy focus on testing new inductive biases on novel datasets as we work on tasks that go beyond object recognition.

Michael Muller · Plamen P Angelov · Shion Guha · Marina Kogan · Gina Neff · Nuria Oliver · Manuel Rodriguez · Adrian Weller

Human-Centered AI (HCAI) is an emerging discipline that aims to create AI systems that amplify [46,45] and augment [47] human abilities and preserve human control in order to make AI partnerships more productive, enjoyable, and fair [19]. Our workshop aims to bring together researchers and practitioners from the NeurIPS and HCI communities and others with convergent interests in HCAI. With an emphasis on diversity and discussion, we will explore research questions that stem from the increasingly widespread usage of machine learning algorithms across all areas of society, with a specific focus on understanding both technical and design requirements for HCAI systems, as well as how to evaluate the efficacy and effects of HCAI systems.

Elias Bareinboim · Bernhard Schölkopf · Terrence Sejnowski · Yoshua Bengio · Judea Pearl

Machine Learning has been extremely successful throughout many critical areas, including computer vision, natural language processing, and game-playing. Still, a growing segment of the machine learning community recognizes that there are still fundamental pieces missing from the AI puzzle, among them causal inference.

This recognition comes from the observation that even though causality is a central component found throughout the sciences, engineering, and many other aspects of human cognition, explicit reference to causal relationships is largely missing in current learning systems. This entails a new goal of integrating causal inference and machine learning capabilities into the next generation of intelligent systems, thus paving the way towards higher levels of intelligence and human-centric AI. The synergy goes in both directions; causal inference benefitting from machine learning and the other way around. Current machine learning systems lack the ability to leverage the invariances imprinted by the underlying causal mechanisms towards reasoning about generalizability, explainability, interpretability, and robustness. Current causal inference methods, on the other hand, lack the ability to scale up to high-dimensional settings, where current machine learning systems excel.
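Concretely, the causal relationships at issue are typically encoded as a structural causal model, in which each variable is generated by a mechanism of the form (standard definition, included here for illustration)

```latex
X_i := f_i(\mathrm{PA}_i, U_i), \qquad i = 1, \dots, n,
```

where PA_i are the direct causes (parents) of X_i and U_i is exogenous noise; interventions replace individual mechanisms while leaving the others invariant, which is exactly the kind of invariance current learning systems fail to exploit.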

The goal of this workshop is to bring together researchers from both camps to initiate principled discussions about the integration of causal …

Xinshuo Weng · Jiachen Li · Nick Rhinehart · Daniel Omeiza · Ali Baheri · Rowan McAllister

We propose a full-day workshop, called “Machine Learning for Autonomous Driving” (ML4AD), as a venue for machine learning (ML) researchers to discuss research problems concerning autonomous driving (AD). Our goal is to promote ML research, and its real-world impact, on self-driving technologies. Full self-driving capability (“Level 5”) is far from solved and extremely complex, beyond the capability of any one institution or company, necessitating larger-scale communication and collaboration, which we believe workshop formats help provide.

We propose a large-attendance talk format of approximately 500 attendees, including (1) a call for papers with poster sessions and spotlight presentations; (2) keynote talks to communicate the state of the art; (3) panel debates to discuss future research directions; (4) a challenge to encourage interaction around a common benchmark task; and (5) social breaks for newer researchers to network and meet others.

Reinhard Heckel · Paul Hand · Rebecca Willett · christopher metzler · Mahdi Soltanolkotabi

Learning-based methods, and in particular deep neural networks, have emerged as highly successful and universal tools for image and signal recovery and restoration. They achieve state-of-the-art results on tasks ranging from image denoising and compression to image reconstruction from few and noisy measurements. They are starting to be used in important imaging technologies, for example in GE's newest computed tomography scanners and in the newest generation of the iPhone.

The field has a range of theoretical and practical questions that remain unanswered, including questions about guarantees, robustness, architectural design, the role of learning, domain-specific applications, and more. This virtual workshop aims at bringing together theoreticians and practitioners in order to chart out recent advances and discuss new directions in deep learning-based approaches for solving inverse problems in the imaging sciences and beyond.

Ashwin Balakrishna · Brijen Thananjeyan · Daniel Brown · Marek Petrik · Melanie Zeilinger · Sylvia Herbert

Control and decision systems are becoming a ubiquitous part of our daily lives, ranging from serving advertisements or recommendations on the internet to controlling autonomous physical systems such as industrial equipment or robots. While these systems have shown the potential for significantly improving quality of life and industrial efficiency, the decisions they make can also cause significant damage. For example, an online retailer recommending dangerous products to children, a social media platform serving content which polarizes society, or a household robot/autonomous car which collides with surrounding humans can all cause significant direct harm to society. These undesirable behaviors can not only be dangerous but also lead to significant inefficiencies when deploying learning-based agents in the real world. This motivates developing algorithms for learning-based control that can reason about uncertainty and constraints in the environment to explicitly avoid undesirable behaviors. We believe hosting a discussion on safety in learning-based control at NeurIPS 2021 would have far-reaching societal impacts by connecting researchers from a variety of disciplines including machine learning, control theory, AI safety, operations research, robotics, and formal methods.
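One common formalization of such constraint-aware learning is the constrained MDP (standard notation, included here for illustration):

```latex
\max_\pi \; \mathbb{E}_\pi\Big[\textstyle\sum_t \gamma^t\, r(s_t, a_t)\Big]
\quad \text{subject to} \quad
\mathbb{E}_\pi\Big[\textstyle\sum_t \gamma^t\, c(s_t, a_t)\Big] \le d,
```

where the cost c penalizes undesirable behavior and d is the allowed safety budget.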

Steven Y. Feng · Dor Arad Hudson · Tatsunori Hashimoto · DONGYEOP Kang · Varun Prashant Gangal · Anusha Balakrishnan · Joel Tetreault

Over the past few years, there has been increased interest in the areas of language and image generation within the community. As texts generated by models like GPT-3 start to sound more fluid and natural, and images and videos generated by GAN models appear more realistic, researchers have begun focusing on qualitative properties of the generated content, such as the ability to control its style and structure, or to incorporate information from external sources into the output. Such aims are extremely important to make language and image generation useful for human-machine interaction and other real-world applications, including machine co-creativity, entertainment, reducing biases or toxicity, and improving conversational agents and personal assistants.

Achieving these ambitious but important goals introduces challenges not only from NLP and Vision perspectives, but also ones that pertain to Machine Learning as a whole, which has witnessed a growing body of research in relevant domains such as interpretability, disentanglement, robustness, and representation learning. We believe that progress towards the realization of human-like language and image generation may benefit greatly from insights and progress in these and other ML areas.

In this workshop, we propose to bring together researchers from the NLP, Vision, and ML communities to discuss the …

Tom White · Mattie Tesfaldet · Samaneh Azadi · Daphne Ippolito · Lia Coleman · David Ha

Machine co-creativity continues to grow and attract a wider audience to machine learning. Generative models, for example, have enabled new types of media creation across language, images, and music, including recent advances such as CLIP, VQGAN, and DALL·E. This one-day workshop will broadly explore topics in the applications of machine learning to creativity and design, which include:

State-of-the-art algorithms for the creation of new media. Machine learning models achieving state-of-the-art in traditional media creation tasks (e.g., image, audio, or video synthesis) that are also being used by the artist community will be showcased.

Artist accessibility of machine learning models. Researchers building the next generation of machine learning models for media creation will be challenged in understanding the accessibility needs of artists. Artists and members of the Human-Computer Interaction / User Experience community will be encouraged to engage in the conversation.

The sociocultural and political impact of these new models. With the increased popularity of generative machine learning models, we are witnessing these models start to impact our everyday surroundings, ranging from racial and gender bias in algorithms and datasets used for media creation to how new media manipulation tools may erode our collective trust in media content.

Artistic applications. We will hear …

Omer Ben-Porat · Nika Haghtalab · Annie Liang · Yishay Mansour · David Parkes

In recent years, machine learning has been called upon to solve increasingly complex tasks and to regulate many aspects of our social, economic, and technological world. These applications include learning economic policies from data, prediction in financial markets, learning personalized models across populations of users, and ranking qualified candidates for admission, hiring, and lending. These tasks take place in a complex social and economic context where the learners and objects of learning are often people or organizations that are impacted by the learning algorithm and, in return, can take actions that influence the learning process. Learning in this context calls for a new vision for machine learning and economics that aligns the incentives and interests of the learners and other parties and is robust to the evolving social and economic needs. This workshop explores a view of machine learning and economics that considers interactions of learning systems with a wide range of social and strategic behaviors. Examples of these problems include: multi-agent learning systems, welfare-aware machine learning, learning from strategic and economic data, learning as a behavioral model, and causal inference for learning the impact of strategic choices.

Pieter Abbeel · Chelsea Finn · David Silver · Matthew Taylor · Martha White · Srijita Das · Yuqing Du · Andrew Patterson · Manan Tomar · Olivia Watkins

In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multiagent interactions. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside of the field gain perspective about the current state of the art and potential directions for future contributions.

Benoit Steiner · Jonathan Raiman · Martin Maas · Azade Nova · Mimee Xu · Anna Goldie

ML for Systems is an emerging research area that has shown promising results in the past few years. Recent work has shown that ML can be used to replace heuristics, solve complex optimization problems, and improve modeling and forecasting when applied in the context of computer systems.

As an emerging area, ML for Systems is still in the process of defining the common problems, frameworks, and approaches to solving its problems, which requires venues that bring together researchers and practitioners from both the systems and machine learning communities. Past iterations of the workshop focused on providing such a venue and broke new ground on a broad range of emerging new directions in ML for Systems. We want to carry this momentum forward by encouraging the community to explore areas that have previously received less attention. Specifically, the workshop commits to highlighting works that optimize for security and privacy, as opposed to metrics like speed and memory alone, and that use ML to optimize for energy usage and carbon impact. Additionally, this year we will encourage the development of shared methodology, tools, and frameworks.

For the first time since the inception of the workshop, we will organize a competition. This competition will …

Shiori Sagawa · Pang Wei Koh · Fanny Yang · Hongseok Namkoong · Jiashi Feng · Kate Saenko · Percy Liang · Sarah Bird · Sergey Levine

Distribution shifts, where a model is deployed on a data distribution different from what it was trained on, pose significant robustness challenges in real-world ML applications. Such shifts are often unavoidable in the wild and have been shown to substantially degrade model performance in applications such as biomedicine, wildlife conservation, sustainable development, robotics, education, and criminal justice. For example, models can systematically fail when tested on patients from different hospitals or people from different demographics. Despite the ubiquity of distribution shifts in ML applications, work on these types of real-world shifts is currently underrepresented in the ML research community, with prior work generally focusing instead on synthetic shifts. However, recent work has shown that models that are robust to one kind of shift need not be robust to another, underscoring the importance and urgency of studying the types of distribution shifts that arise in real-world ML deployments. With this workshop, we aim to facilitate deeper exchanges between domain experts in various ML application areas and more methods-oriented researchers, and ground the development of methods for characterizing and mitigating distribution shifts in real-world application contexts.
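Formally, a distribution shift means p_test ≠ p_train; two standard special cases (notation added here for illustration) are

```latex
\text{covariate shift: } p_{\text{test}}(x) \neq p_{\text{train}}(x) \text{ with } p(y \mid x) \text{ fixed}, \qquad
\text{label shift: } p_{\text{test}}(y) \neq p_{\text{train}}(y) \text{ with } p(x \mid y) \text{ fixed}.
```

Real-world shifts of the kind discussed above rarely fall neatly into either case.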

Ritwik Gupta · Esther Rolf · Robin Murphy · Eric Heim

Natural disasters are one of the oldest threats to both individuals and the societies they co-exist in. As a result, humanity has ceaselessly sought ways to provide assistance to people in need after disasters have struck. Further, natural disasters are but a single, extreme example of the many possible humanitarian crises. Disease outbreaks, famine, and oppression against disadvantaged groups can pose even greater dangers to people, with less obvious solutions. In this proposed workshop, we seek to bring together the Artificial Intelligence (AI) and Humanitarian Assistance and Disaster Response (HADR) communities in order to bring AI to bear on real-world humanitarian crises. Through this workshop, we intend to establish meaningful dialogue between the communities.

By the end of the workshop, the NeurIPS research community can come to understand the practical challenges of aiding those who are experiencing crises, while the HADR community can understand the landscape that is the state of art and practice in AI. Through this, we seek to begin establishing a pipeline of transitioning the research created by the NeurIPS community to real-world humanitarian issues.

Yu-Xiang Wang · Borja Balle · Giovanni Cherubin · Kamalika Chaudhuri · Antti Honkela · Jonathan Lebensold · Casey Meehan · Mi Jung Park · Adrian Weller · Yuqing Zhu

The goal of our workshop is to bring together privacy experts working in academia and industry to discuss the present and future of technologies that enable machine learning with privacy. The workshop will focus on the technical aspects of privacy research and deployment with invited and contributed talks by distinguished researchers in the area. By design, the workshop should serve as a meeting point for regular NeurIPS attendees interested/working on privacy to meet other parts of the privacy community (security researchers, legal scholars, industry practitioners). The focus this year will include emerging problems such as machine unlearning, privacy-fairness tradeoffs and legal challenges in recent deployments of differential privacy (e.g. that of the US Census Bureau). We will conclude the workshop with a panel discussion titled: “Machine Learning and Privacy in Practice: Challenges, Pitfalls and Opportunities”. A diverse set of panelists will address the challenges faced applying these technologies to the real world. The programme of the workshop will emphasize the diversity of points of view on the problem of privacy. We will also ensure that there is ample time for discussions that encourage networking between researchers, which should result in mutually beneficial new long-term collaborations.

Yarin Gal · Yingzhen Li · Sebastian Farquhar · Christos Louizos · Eric Nalisnick · Andrew Gordon Wilson · Zoubin Ghahramani · Kevin Murphy · Max Welling

To deploy deep learning in the wild responsibly, we must know when models are making unsubstantiated guesses. The field of Bayesian Deep Learning (BDL) has been a focal point in the ML community for the development of such tools. Big strides have been made in BDL in recent years, with the field making an impact outside of the ML community, in fields including astronomy, medical imaging, physical sciences, and many others. But the field of BDL itself is facing an evaluation crisis: most BDL papers evaluate the uncertainty estimation quality of new methods on MNIST and CIFAR alone, ignoring the needs of the real-world applications that use BDL. Therefore, apart from discussing the latest advances in BDL methodologies, a particular focus of this year’s programme is on the reliability of BDL techniques in downstream tasks. This focus is reflected through invited talks from practitioners in other fields and by working together with the two NeurIPS challenges in BDL — the Approximate Inference in Bayesian Deep Learning Challenge and the Shifts Challenge on Robustness and Uncertainty under Real-World Distributional Shift — advertising work done in applications including autonomous driving, medical, space, and more. We hope that the mainstream BDL community will adopt real world …
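The uncertainty estimates at stake come from (approximations to) the posterior predictive distribution (standard formulation, shown here for illustration):

```latex
p(y \mid x, \mathcal{D}) \;=\; \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, \mathrm{d}w,
```

which BDL methods must approximate, since the weight posterior p(w | D) is intractable for deep networks.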

Breandan Considine · Disha Shrivastava · David Yu-Tung Hui · Chin-Wei Huang · Shawn Tan · Xujie Si · Prakash Panangaden · Guy Van den Broeck · Daniel Tarlow

Neural information processing systems have benefited tremendously from the availability of programming languages and frameworks for automatic differentiation (AD). Not only do neural information processing systems benefit from programming languages for automatic inference, but they can also be considered languages in their own right, consisting of differentiable and stochastic primitives. Combined with neural language models, these systems are increasingly capable of generating symbolic programs that a human programmer might write in a high-level language. Developing neurosymbolic systems for automatic program synthesis requires insights from both statistical learning and programming languages.

AIPLANS invites all researchers working towards the same purpose in these two communities to build on common ground. Our workshop is designed to be as inclusive as possible towards researchers engaged in building programming languages and neurosymbolic systems.

Luca Celotti · Kelly Buchanan · Jorge Ortiz · Patrick Kidger · Stefano Massaroli · Michael Poli · Lily Hu · Ermal Rrapaj · Martin Magill · Thorsteinn Jonsson · Animesh Garg · Murtadha Aldeer

Deep learning can solve differential equations, and differential equations can model deep learning. What have we learned and where to next?

The focus of this workshop is on the interplay between deep learning (DL) and differential equations (DEs). In recent years, there has been a rapid increase of machine learning applications in computational sciences, with some of the most impressive results at the interface of DL and DEs. These successes have widespread implications, as DEs are among the most well-understood tools for the mathematical analysis of scientific knowledge, and they are fundamental building blocks for mathematical models in engineering, finance, and the natural sciences. This relationship is mutually beneficial. DL techniques have been used in a variety of ways to dramatically enhance the effectiveness of DE solvers and computer simulations. Conversely, DEs have also been used as mathematical models of the neural architectures and training algorithms arising in DL.
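A crisp example of this two-way relationship is the residual network / ODE correspondence popularized by neural ODEs (shown here for illustration): a residual block performs one explicit Euler step of an underlying differential equation,

```latex
x_{k+1} = x_k + h\, f_\theta(x_k)
\quad \longleftrightarrow \quad
\frac{\mathrm{d}x}{\mathrm{d}t} = f_\theta(x(t)).
```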

This workshop will aim to bring together researchers from each discipline to encourage intellectual exchanges and cultivate relationships between the two communities. The scope of the workshop will include important topics at the intersection of DL and DEs.

Elizabeth Wood · Adji Bousso Dieng · Aleksandrina Goeva · Anshul Kundaje · Barbara Engelhardt · Chang Liu · David Van Valen · Debora Marks · Edward Boyden · Eli N Weinstein · Lorin Crawford · Mor Nitzan · Romain Lopez · Tamara Broderick · Ray Jones · Wouter Boomsma · Yixin Wang

One of the greatest challenges facing biologists and the statisticians who work with them is the goal of representation learning: to discover and define appropriate representations of data in order to perform complex, multi-scale machine learning tasks. This workshop is designed to bring together trainee and expert machine learning scientists with those at the very forefront of biological research for this purpose. Our full-day workshop will advance the joint project of the CS and biology communities with the goal of "Learning Meaningful Representations of Life" (LMRL), emphasizing interpretable representation learning of structure and principle.

We will organize around the theme "From Genomes to Phenotype, and Back Again": an extension of a long-standing effort in the biological sciences to assign biochemical and cellular functions to the millions of as-yet uncharacterized gene products discovered by genome sequencing. ML methods to predict phenotype from genotype are rapidly advancing and starting to achieve widespread success. At the same time, large scale gene synthesis and genome editing technologies have rapidly matured, and become the foundation for new scientific insight as well as biomedical and industrial advances. ML-based methods have the potential to accelerate and extend these technologies' application, by providing tools for solving the key …

Thomas Gilbert · Stuart J Russell · Tom O Zick · Aaron Snoswell · Michael Dennis

Sponsored by the Center for Human-Compatible AI at UC Berkeley, and with support from the Simons Institute and the Center for Long-Term Cybersecurity, we are convening a cross-disciplinary group of researchers to examine the near-term policy concerns of Reinforcement Learning (RL). RL is a rapidly growing branch of AI research, with the capacity to learn to exploit our dynamic behavior in real time. From YouTube’s recommendation algorithm to post-surgery opioid prescriptions, RL algorithms are poised to permeate our daily lives. The ability of the RL system to tease out behavioral responses, and the human experimentation inherent to its learning, motivate a range of crucial policy questions about RL’s societal implications that are distinct from those addressed in the literature on other branches of Machine Learning (ML).

DOU QI · Marleen de Bruijne · Ben Glocker · Aasa Feragen · Herve Lombaert · Ipek Oguz · Jonas Teuwen · Islem Rekik · Darko Stern · Xiaoxiao Li

“Medical Imaging meets NeurIPS” aims to bring researchers together from the medical imaging and machine learning communities to create a cutting-edge venue for discussing the major challenges in the field and opportunities for research and novel applications. The proposed event will be the continuation of a successful workshop organized in NeurIPS 2017, 2018, 2019, and 2020. It will feature a series of invited speakers from academia, medical sciences and industry to present latest works in progress and give an overview of recent technological advances and remaining major challenges. The workshop website is https://sites.google.com/view/med-neurips-2021.

Diana Cai · Sameer Deshpande · Michael Hughes · Tamara Broderick · Trevor Campbell · Nick Foti · Barbara Engelhardt · Sinead Williamson

Probabilistic modeling is a foundation of modern data analysis, due in part to the flexibility and interpretability of these methods, and has been applied to numerous application domains, such as the biological sciences, social and political sciences, engineering, and health care. However, any probabilistic model relies on assumptions that are necessarily a simplification of complex real-life processes; thus, any such model is inevitably misspecified in practice. In addition, as data set sizes grow and probabilistic models become more complex, applying a probabilistic modeling analysis often relies on algorithmic approximations, such as approximate Bayesian inference, numerical approximations, or data summarization methods. Thus, in many cases, approximations used for efficient computation lead to fitting a misspecified model by design (e.g., variational inference). Importantly, in some cases this misspecification leads to useful model inferences, but in others it may lead to misleading and potentially harmful inferences that may then be used for important downstream tasks, e.g., making scientific inferences or policy decisions.
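The variational-inference example can be made precise (standard evidence lower bound, included here for illustration): VI maximizes

```latex
\log p(x) \;\ge\; \mathbb{E}_{q(z)}\big[\log p(x, z) - \log q(z)\big]
```

over a restricted family of distributions q, so the fitted posterior is misspecified by construction whenever the true posterior lies outside that family.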

The goal of the workshop is to bring together researchers focused on methods, applications, and theory to outline some of the core problems in specifying and applying probabilistic models in modern data contexts along with current state-of-the-art solutions. …

Roberto Capobianco · Biagio La Rosa · Leilani Gilpin · Wen Sun · Alice Xiang · Alexander Feldman

Recently, artificial intelligence (AI) has seen the explosion of deep learning (DL) models, which are able to reach super-human performance in several tasks. These improvements, however, come at a cost: DL models are "black boxes", where one feeds an input and obtains an output without understanding the motivations behind that prediction or decision. The eXplainable AI (XAI) field tries to address such problems by proposing methods that explain the behavior of these networks.
In this workshop, we narrow the XAI focus to the specific case in which developers or researchers need to debug their models and diagnose system behaviors. This type of user typically has substantial knowledge about the models themselves but needs to validate, debug, and improve them.

This is an important topic for several reasons. For example, domains like healthcare and justice require that experts are able to validate DL models before deployment. Despite this, the development of novel deep learning models is dominated by trial-and-error phases guided by aggregated metrics and old benchmarks that tell us very little about the skills and utility of these models. Moreover, the debugging phase is a nightmare for practitioners too.

Another community that is working on tracking and debugging machine learning …

Manfred Díaz · Hiroki Furuta · Elise van der Pol · Lisa Lee · Shixiang (Shane) Gu · Pablo Samuel Castro · Simon Du · Marc Bellemare · Sergey Levine

This workshop builds connections between different areas of RL centered around the understanding of algorithms and their context. We are interested in questions such as, but not limited to: (i) How can we gauge the complexity of an RL problem?, (ii) Which classes of algorithms can tackle which classes of problems?, and (iii) How can we develop practically applicable guidelines for formulating RL tasks that are tractable to solve? We expect submissions that address these and other related questions through an ecological and data-centric view, pushing forward the limits of our comprehension of the RL problem.

Natasha Jaques · Edward Hughes · Jakob Foerster · Noam Brown · Kalesha Bullard · Charlotte Smith

The human ability to cooperate in a wide range of contexts is a key ingredient in the success of our species. Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at every scale, from the daily routines of highway driving, communicating in a shared language, and workplace collaboration, to the global challenges of climate change, pandemic preparedness, and international trade. With AI agents playing an ever greater role in our lives, we must endow them with similar abilities. In particular, they must understand the behaviors of others, find common ground by which to communicate with them, make credible commitments, and establish institutions which promote cooperative behavior. By construction, the goal of Cooperative AI is interdisciplinary in nature. Therefore, our workshop will bring together scholars from diverse backgrounds including reinforcement learning (and inverse RL), multi-agent systems, human-AI interaction, game theory, mechanism design, social choice, fairness, cognitive science, language learning, and interpretability. This year we will organize the workshop along two axes. First, we will discuss how to incentivize cooperation in AI systems, developing algorithms that can act effectively in general-sum settings and that encourage others to cooperate. The second focus is on …

Julia Vogt · Ece Ozkan · Sonali Parbhoo · Melanie F. Pradier · Patrick Schwab · Shengpu Tang · Mario Wieser · Jiayu Yao

Machine learning (ML) methods often achieve superhuman performance levels; however, most existing machine learning research in the medical domain is stalled at the research-paper stage and is not implemented in daily clinical practice. To achieve the overarching goal of realizing the promise of cutting-edge ML techniques and bringing this exciting research to fruition, we must bridge the gap between research and the clinic. In this workshop, we aim to bring together ML researchers and clinicians to discuss the challenges and potential solutions for enabling the use of state-of-the-art ML techniques in daily clinical practice and ultimately improving healthcare, by trying to answer questions like: What are the procedures that bring humans in the loop for auditing ML systems for healthcare? Are the proposed ML methods robust to changes in population, distribution shifts, or other types of biases? What should ML methods/systems fulfill to be successfully deployed in the clinic? What are the failure modes of ML models for healthcare? How can we develop methods for improved interpretability of ML predictions in the context of healthcare? And many others. We will further discuss translational and implementational aspects and talk about challenges and lessons learned from integrating an ML system into clinical …

Biplav Srivastava · Anita Nikolich · Huan Liu · Natwar Modani · Tarmo Koppel

Credible elections are vital to democracy. How can AI help?
Artificial intelligence and machine learning have transformed modern society. They also impact how elections are conducted in democracies, with mixed outcomes. For example, digital marketing campaigns have enabled candidates to connect with voters at scale and communicate remotely during COVID-19, but there remains widespread concern about the spread of election disinformation as the result of AI-enabled bots and aggressive strategies. In response, we propose a workshop that will examine the challenges of credible elections globally in an academic setting, with apolitical discussion of significant issues. The speakers, panels, and reviewed papers will discuss current and best practices in holding elections, tools available for candidates, and the experience of voters. They will highlight gaps and experience from AI-based interventions. To ground the discussion, the invited speakers and panelists are drawn from three illustrative geographies: the US, representing one of the world's oldest democracies; India, representing the largest democracy in the world; and Estonia, representing a country using digital technologies extensively during elections and as a facet of daily life.

Rumi Chunara · Daniel Lizotte · Laura Rosella · Esra Suel · Marie Charpignon

Public health and population health refer to the study of daily life factors, prevention efforts, and their effects on the health of populations. Building on the success of our first workshop at NeurIPS 2020, this workshop will focus on data and algorithms related to the non-medical conditions that shape our health, including structural, lifestyle, policy, social, behavioral, and environmental factors. Data traditionally used in machine learning and health problems largely capture our interactions with the health care system, and this workshop aims to balance this with machine learning work that uses data on non-medical conditions. This year we also broaden and integrate discussion of machine learning in the closely related area of urban planning, which is concerned with the technical and political processes of developing and designing land use. This includes the built environment, air and water, and the infrastructure passing into and out of urban areas, such as transportation, communications, distribution networks, sanitation, and the protection and use of the environment, including their accessibility and equity. We make this extension this year because human health and the environment are fundamentally, and increasingly, intertwined, and because of the recent emergence of more modern data analytic …

Paula Rodriguez Diaz · Konstantin Klemmer · Sally Simone Fobi · Oluwafemi Azeez · Niveditha Kalavakonda · Aya Salama · Tejumade Afonja

While some nations are regaining normality almost a year and a half after the COVID-19 pandemic struck as a global challenge (schools are reopening, face mask mandates are being dropped, economies are recovering, etc.), other nations, especially developing ones, are in the midst of their most critical scenarios in terms of health, economy, and education. Although this ongoing pandemic has been a global challenge, it has had local consequences and created local needs in developing regions that are not necessarily shared globally. This situation makes us ask how global challenges such as access to vaccines, good internet connectivity, sanitation, and water, as well as poverty, climate change, and environmental degradation, among others, have had and will have local consequences in developing nations, and how machine learning approaches can assist in designing solutions that take these local characteristics into account.

Past iterations of the ML4D workshop have explored: the development of smart solutions for intractable problems, the challenges and risks that arise when deploying machine learning models in developing regions, and the building of machine learning models with improved resilience. This year, we call on our community to identify and understand the particular challenges and consequences that global issues may create in developing regions while proposing machine …

José Miguel Hernández-Lobato · Yingzhen Li · Yichuan Zhang · Cheng Zhang · Austin Tripp · Weiwei Pan · Oren Rippel

Deep generative models (DGMs) have become an important research branch in deep learning, comprising a broad family of methods such as variational autoencoders, generative adversarial networks, normalizing flows, energy-based models, and autoregressive models. Many of these methods have been shown to achieve state-of-the-art results in the generation of synthetic data of different types such as text, speech, images, music, and molecules. Beyond generating synthetic data, however, DGMs are of particular relevance in many practical downstream applications: imputation and acquisition of missing data, anomaly detection, data denoising, compressed sensing, data compression, image super-resolution, molecule optimization, interpretation of machine learning methods, identifying causal structure in data, generation of molecular structures, and more. At present, however, there seems to be a disconnect between researchers working on new DGM-based methods and researchers applying such methods to practical problems like the ones mentioned above. This workshop aims to fill this gap by bringing the two communities together.
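To make one member of this family concrete, here is a minimal sketch of a variational autoencoder loss in PyTorch. It is illustrative only: the architecture, dimensions, and names (TinyVAE, elbo_loss) are our assumptions, not anything specified by the workshop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)   # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def elbo_loss(x, x_logits, mu, logvar):
    """Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I))."""
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage on a batch of [0, 1]-valued inputs (e.g., flattened images):
model = TinyVAE()
x = torch.rand(32, 784)
x_logits, mu, logvar = model(x)
loss = elbo_loss(x, x_logits, mu, logvar)
```

Once trained by minimizing this loss, the same model can plausibly serve downstream uses of the kind listed above, such as imputation or anomaly scoring via the reconstruction term.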

Xiao-Yang Liu · Qibin Zhao · Ivan Oseledets · Yufei Ding · Guillaume Rabusseau · Jean Kossaifi · Khadijeh Najafi · Anwar Walid · Andrzej Cichocki · Masashi Sugiyama

Quantum tensor networks in machine learning (QTNML) are envisioned to advance AI technologies. Quantum machine learning [1][2] promises quantum advantages over classical machine learning (potentially exponential speedups in training [3] and quadratic improvements in learning efficiency [4]), while tensor networks provide powerful simulations of quantum machine learning algorithms on classical computers. As a rapidly growing interdisciplinary area, QTNML may serve as an amplifier for computational intelligence, a transformer for machine learning innovations, and a propeller for AI industrialization.

Tensor networks [5], contracted networks of factor core tensors, have arisen independently in several areas of science and engineering. Such networks appear in the description of physical processes, and an accompanying collection of numerical techniques has elevated the use of quantum tensor networks into a variational model of machine learning. These techniques have recently proven ripe for application to many traditional problems in deep learning [6,7,8]. More potential QTNML technologies are rapidly emerging, such as approximating probability functions and probabilistic graphical models [9,10,11,12]. Quantum algorithms are typically described by quantum circuits (quantum computational networks), which are themselves a class of tensor networks, creating an evident interplay between classical tensor network contraction algorithms and executing tensor contractions on …
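To give a concrete picture of "a contracted network of factor core tensors", the following sketch evaluates a single amplitude of a random matrix product state (MPS), one common tensor network, using NumPy's einsum. The shapes, bond dimension, and function names are illustrative assumptions rather than anything prescribed here.

```python
import numpy as np

def random_mps(n_sites, phys_dim=2, bond_dim=4):
    """A random MPS: one rank-3 core (left, physical, right) per site."""
    dims = [1] + [bond_dim] * (n_sites - 1) + [1]
    return [np.random.randn(dims[i], phys_dim, dims[i + 1])
            for i in range(n_sites)]

def mps_amplitude(cores, bits):
    """Contract the chain left to right to read out one amplitude <bits|psi>."""
    v = np.ones(1)
    for core, b in zip(cores, bits):
        v = np.einsum("l,lr->r", v, core[:, b, :])  # absorb one core
    return v.item()

cores = random_mps(n_sites=6)
print(mps_amplitude(cores, bits=[0, 1, 1, 0, 1, 0]))
```

The left-to-right contraction order keeps the cost linear in the number of sites, which is exactly the kind of classical contraction strategy the paragraph above refers to.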

Joshua T Vogelstein · Weiwei Yang · Soledad Villar · Zenna Tavares · Johnathan Flowers · Onyema Osuagwu · Weishung Liu

Out-of-distribution (OOD) generalization and adaptation is a key challenge the field of machine learning (ML) must overcome to achieve its eventual aims associated with artificial intelligence (AI). Humans, and possibly non-human animals, exhibit OOD capabilities far beyond those of modern ML solutions. It is natural, therefore, to wonder (i) what properties of natural intelligence enable OOD learning (for example, is a cortex required, can human organoids achieve OOD capabilities, etc.), and (ii) what research programs can most effectively identify and extract those properties to inform future ML solutions? Although many workshops have focused on aspects of (i), it is through the additional focus on (ii) that this workshop will best foster collaborations and research to advance the capabilities of ML.


This workshop is designed to bring together the foremost leaders in natural and artificial intelligence, along with established and emerging researchers in these fields, to answer the above two questions. Our hope is that by the end of the workshop, we will have a head start on identifying a vision that will (1) formalize hypothetical learning mechanisms that enable OOD generalization and adaptation, and characterize their capabilities and limitations; and (2) propose experiments to measure, manipulate, and model biological systems to …

Daniel Reichman · Joshua Peterson · Kiran Tomlinson · Annie Liang · Tom Griffiths

Understanding human decision-making is a key focus of behavioral economics, psychology, and neuroscience, with far-reaching applications from public policy to industry. Recently, advances in machine learning have resulted in better predictive models of human decisions and have even enabled new theories of decision-making. On the other hand, machine learning systems are increasingly being used to make decisions that affect people, including hiring, resource allocation, and parole. These lines of work are deeply interconnected: learning what people value is crucial both for predicting their decisions and for making good decisions on their behalf. In this workshop, we will bring together experts from the wide array of disciplines concerned with human and machine decisions to exchange ideas around three main focus areas: (1) using theories of decision-making to improve machine learning models, (2) using machine learning to inform theories of decision-making, and (3) improving the interaction between people and decision-making AIs.

Maria João Sousa · Hari Prasanna Das · Sally Simone Fobi · Jan Drgona · Tegan Maharaj · Yoshua Bengio

The focus of this workshop is the use of machine learning to help address climate change, encompassing mitigation efforts (reducing greenhouse gas emissions), adaptation measures (preparing for unavoidable consequences), and climate science (our understanding of the climate and future climate predictions). The scope of the workshop includes climate-relevant applications of machine learning to the power sector, buildings and transportation infrastructure, agriculture and land use, extreme event prediction, disaster response, climate policy, and climate finance. The goals of the workshop are: (1) to showcase high-impact applications of ML to climate change mitigation, adaptation, and climate science, (2) to showcase novel and interesting problem settings and challenges for ML techniques, (3) to encourage fruitful collaboration between the ML community and a diverse set of researchers and practitioners from climate change-related fields, and (4) to promote dialogue with decision-makers in the private and public sectors to ensure that the work presented leads to responsible and meaningful deployment.

Pengtao Xie · Ishan Misra · Pulkit Agrawal · Abdelrahman Mohamed · Shentong Mo · Youwei Liang · Jeannette Bohg · Kristina N Toutanova

Self-supervised learning (SSL) is an unsupervised approach to representation learning that does not rely on human-provided labels. It creates auxiliary tasks on unlabeled input data and learns representations by solving these tasks. SSL has demonstrated great success on images (e.g., MoCo [19], PIRL [9], SimCLR [20]) and text (e.g., BERT [21]) and has shown promising results in other data modalities, including graphs, time series, audio, etc. On a wide variety of tasks, SSL achieves performance close to that of fully supervised approaches without using human-provided labels. Existing SSL research mostly focuses on improving empirical performance without a theoretical foundation: while the proposed SSL approaches are empirically effective, it is not clear theoretically why they perform well. For example, why do certain auxiliary tasks in SSL perform better than others? How many unlabeled examples does SSL need to learn a good representation? How is the performance of SSL affected by neural architectures? In this workshop, we aim to bridge this gap between theory and practice. We bring together SSL-interested researchers from various domains to discuss the theoretical foundations of empirically well-performing SSL approaches and how these theoretical insights can further improve SSL's empirical performance. Different from previous SSL-related workshops which focus on …
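As a concrete instance of an auxiliary task on unlabeled data, here is a minimal sketch of a SimCLR-style contrastive loss in PyTorch: two augmented views of each example must identify each other among all other embeddings in the batch. Shapes and names are assumed purely for illustration.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss over two views; z1, z2 are (N, d) embeddings."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine sims
    sim.fill_diagonal_(float("-inf"))                   # drop self-similarity
    n = z1.shape[0]
    # The positive for row i is row i + N (and vice versa): the other view.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage with hypothetical encoder outputs for two augmentations of a batch:
z1, z2 = torch.randn(64, 128), torch.randn(64, 128)
loss = nt_xent(z1, z2)
```

Note how the "label" here is manufactured from the data itself (the index of the paired view), which is precisely what makes the task self-supervised.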

Angela Schoellig · Animesh Garg · Somil Bansal · SiQi Zhou · Melissa Greeff · Lukas Brunke

Embodied systems are playing an increasingly important role in our lives. Examples include, but are not limited to, autonomous driving, drone delivery, and service robots. In real-world deployments, these systems are required to learn and operate safely under various sources of uncertainty. As noted in the “Roadmap for US Robotics (2020)”, safe learning and adaptation is a key aspect of next-generation robotics. Learning is ingrained in all components of the robotics software stack, including perception, planning, and control. While the safety and robustness of these components have been identified as critical for real-world deployment, open issues and challenges are often discussed separately in the respective communities. In this workshop, we aim to bring together researchers from machine learning, computer vision, robotics, and control to facilitate interdisciplinary discussion of deployable decision making in embodied systems. Our workshop will focus on two discussion themes: (i) safe learning and decision making in uncertain and unstructured environments and (ii) efficient transfer learning for deployable embodied systems. To facilitate discussion and solicit participation from a broad audience, we plan to have a set of interactive lecture-style presentations, focused discussion panels, and a poster session with contributed paper presentations. By bringing …

Yahav Bechavod · Hoda Heidari · Eric Mazumdar · Celestine Mendler-Dünner · Tijana Zrnic

Classical treatments of machine learning rely on the assumption that, after deployment, the data resembles the data the model was trained on. However, as machine learning models are increasingly used to make consequential decisions about people, individuals often react strategically to the deployed model. These strategic behaviors, which can effectively invalidate the predictive models, have opened up new avenues of research and added new challenges to the deployment of machine learning algorithms in the real world.

Different aspects of strategic behavior have been studied by several communities both within and outside of machine learning. For example, the growing literature on strategic classification studies algorithms for finding strategy-robust decision rules, as well as the properties of such rules. Behavioral economics aims to understand and model people's strategic responses. Recent work on learning in games studies optimization algorithms for finding meaningful equilibria and solution concepts in competitive environments.
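As a toy illustration of the phenomenon studied in strategic classification, the sketch below has agents best-respond to a published linear classifier by shifting their features at quadratic cost. The cost model and the unit utility of a positive label are assumptions made purely for illustration, not a method from the workshop.

```python
import numpy as np

def best_response(x, w, b, cost=1.0):
    """Shift x minimally along w to reach score >= 0, if the gain is worth it."""
    score = w @ x + b
    if score >= 0:
        return x                            # already labeled positively
    step = -score / (w @ w)                 # smallest move is along w
    move = step * w
    if cost * (move @ move) <= 1.0:         # gaming cost vs. unit utility
        return x + move                     # cross the decision boundary
    return x                                # too expensive: report truthfully

# Hypothetical published classifier and one agent's true features.
w, b = np.array([1.0, 2.0]), -1.0
x = np.array([0.2, 0.1])
print(best_response(x, w, b))               # features after strategic response
```

A classifier trained on pre-deployment data sees none of these shifted features, which is exactly how strategic responses can invalidate its predictions.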

This workshop aims to create a dialogue between these different communities, all studying aspects of decision-making and learning with strategic feedback. The goal is to identify common points of interest and open problems in the different subareas, as well as to encourage cross-disciplinary collaboration.

Katy Haynes · Ziad Obermeyer · Emma Pierson · Marzyeh Ghassemi · Matthew Lungren · Sendhil Mullainathan · Matthew McDermott

This workshop will launch a new platform for open medical imaging datasets. Labeled with ground-truth outcomes and curated around a set of unsolved medical problems, these datasets will deepen the ways in which ML can contribute to health and raise a new set of technical challenges.

Alex Bewley · Masha Itkina · Hamidreza Kasaei · Jens Kober · Nathan Lambert · Julien PEREZ · Ransalu Senanayake · Vincent Vanhoucke · Markus Wulfmeier · Igor Gilitschenski

Applying machine learning to real-world systems such as robots has been an important part of the NeurIPS community in past years. Progress in machine learning has enabled robots to demonstrate strong performance in helping humans in some household and care-taking tasks, manufacturing, logistics, transportation, and many other unstructured and human-centric environments. While these results are promising, access to high-quality, task-relevant data remains one of the largest bottlenecks for successful deployment of such technologies in the real world.

Methods to generate, re-use, and integrate more sources of valuable data, such as lifelong learning, transfer, and continuous improvement, could unlock the next step in performance. However, accessing these data sources comes with fundamental challenges, including safety, stability, and the daunting issue of providing supervision for learning while the robot is in operation. Today, unique new opportunities are presenting themselves in this quest for robust, continuous learning: large-scale, self-supervised, and multimodal approaches to learning are matching and often exceeding state-of-the-art supervised learning approaches; reinforcement and imitation learning are becoming more stable and data-efficient in real-world settings; and new approaches that combine strong, principled safety and stability guarantees with the expressive power of machine learning are emerging.

This workshop aims to discuss how these emerging …

Krishna Murthy Jatavallabhula · Rika Antonova · Kevin Smith · Hsiao-Yu Tung · Florian Shkurti · Jeannette Bohg · Josh Tenenbaum

Much progress has been made on end-to-end learning for physical understanding and reasoning. If successful, understanding and reasoning about the physical world promises far-reaching applications in robotics, machine vision, and the physical sciences. Despite this recent progress, our best artificial systems pale in comparison to the flexibility and generalization of human physical reasoning.

Neural information processing systems have shown promising empirical results on synthetic datasets, yet do not transfer well when deployed in novel scenarios (including the physical world). If physical understanding and reasoning techniques are to play a broader role in the physical world, they must be able to function across a wide variety of scenarios, including ones that might lie outside the training distribution. How can we design systems that satisfy these criteria?

Our workshop aims to address this broad question by bringing together experts from machine learning, the physical sciences, cognitive and developmental psychology, and robotics to explore how these techniques may one day be employed in the real world. In particular, we aim to investigate the following questions: 1. What forms of inductive bias best enable the development of physical understanding techniques that are applicable to real-world problems? 2. How do we ensure that the outputs …

Andrew Ng · Lora Aroyo · Greg Diamos · Cody Coleman · Vijay Janapa Reddi · Joaquin Vanschoren · Carole-Jean Wu · Sharon Zhou · Lynn He

Data-Centric AI (DCAI) represents the recent transition from focusing on modeling to focusing on the underlying data used to train and evaluate models. Increasingly, common model architectures have begun to dominate a wide range of tasks, and predictable scaling rules have emerged. While building and using datasets has been critical to these successes, the endeavor is often artisanal: painstaking and expensive. The community lacks high-productivity, efficient open data engineering tools to make building, maintaining, and evaluating datasets easier, cheaper, and more repeatable. The DCAI movement aims to address this lack of tooling, best practices, and infrastructure for managing data in modern ML systems.

The main objective of this workshop is to cultivate the DCAI community into a vibrant interdisciplinary field that tackles practical data problems. We consider some of those problems to be: data collection/generation, data labeling, data preprocessing/augmentation, data quality evaluation, data debt, and data governance. Many of these areas are nascent, and we hope to further their development by knitting them together into a coherent whole. Together we will define the DCAI movement that will shape the future of AI and ML. Please see our call for papers below to take an active role in shaping that …

Pan Lu · Yuhuai Wu · Sean Welleck · Xiaodan Liang · Eric Xing · James McClelland

Mathematical reasoning is a unique aspect of human intelligence and a fundamental building block for scientific and intellectual pursuits. However, learning mathematics is often a challenging human endeavor that relies on expert instructors to create, teach, and evaluate mathematical material. From an educational perspective, AI systems that aid in this process offer increased inclusion and accessibility, efficiency, and understanding of mathematics. Moreover, building systems capable of understanding, creating, and using mathematics offers a unique setting for studying reasoning in AI. This workshop will investigate the intersection of mathematics education and AI, including applications to teaching, evaluation, and assistance. Enabling these applications requires not only innovations in math-AI research, but also a better understanding of the challenges in real-world education scenarios. Hence, we will bring together a group of experts from a diverse set of backgrounds, institutions, and disciplines to drive progress on these and other real-world education scenarios, and to discuss the promise and challenges of integrating mathematical AI into education.

Rishabh Agarwal · Aviral Kumar · George Tucker · Justin Fu · Nan Jiang · Doina Precup

Offline reinforcement learning (RL) is a re-emerging area of study that aims to learn behaviors using only logged data, such as data from previous experiments or human demonstrations, without further environment interaction. It has the potential to enable tremendous progress in a number of real-world decision-making problems where active data collection is expensive (e.g., robotics, drug discovery, dialogue generation, recommendation systems) or unsafe/dangerous (e.g., healthcare, autonomous driving, or education). Such a paradigm promises to resolve a key challenge in bringing reinforcement learning algorithms out of constrained lab settings and into the real world. The first edition of the offline RL workshop, held at NeurIPS 2020, focused on, and led to, algorithmic development in offline RL. This year we propose to shift the focus from algorithm design to bridging the gap between offline RL research and real-world offline RL. Our aim is to create a space for discussion between researchers and practitioners on topics of importance for enabling offline RL methods in the real world. To that end, we have revised the topics and themes of the workshop, invited new speakers working on application-focused areas, and, building on the lively panel discussion last year, we have invited the panelists from last …
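To make "learning behaviors using only logged data" concrete, here is a minimal behavior-cloning sketch in PyTorch, one simple baseline in the offline setting: a policy is fit to a fixed log of (state, action) pairs with no environment interaction. The data shapes and names are assumed for illustration.

```python
import torch
import torch.nn as nn

def behavior_cloning(states, actions, n_actions, epochs=200):
    """Fit a discrete policy to logged (state, action) pairs; no env access."""
    policy = nn.Sequential(nn.Linear(states.shape[1], 64), nn.ReLU(),
                           nn.Linear(64, n_actions))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), actions)  # imitate the logged actions
        loss.backward()
        opt.step()
    return policy

# Hypothetical log: 512 transitions, 8-dim states, 4 discrete actions.
states = torch.randn(512, 8)
actions = torch.randint(0, 4, (512,))
policy = behavior_cloning(states, actions, n_actions=4)
```

Much of the algorithmic work the first workshop edition focused on addresses what this baseline ignores: acting better than the logging policy without querying the environment on unseen states.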

Aurelien Bibaut · Maria Dimakopoulou · Nathan Kallus · Xinkun Nie · Masatoshi Uehara · Kelly Zhang

Sequential decision-making problems appear in settings as varied as healthcare, e-commerce, operations management, and policymaking, and depending on the context they can have very different features that make each problem unique. Problems can involve online learning or offline data, known cost structures or unknown counterfactuals, continuous actions (with or without constraints) or finite or combinatorial actions, stationary environments or environments with dynamic agents, and utilitarian, fairness, or equity considerations. More and more, causal inference and discovery and adjacent statistical theories have come to bear on such problems, from the early work on longitudinal causal inference from the last millennium up to recent developments in bandit algorithms and inference, dynamic treatment regimes, online and offline reinforcement learning, interventions in general causal graphs and the discovery thereof, and more. While the interaction between these theories has grown, expertise is spread across many different disciplines, including CS/ML, (bio)statistics, econometrics, ethics/law, and operations research.
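One recurring tool at this intersection, estimating a new policy's value from logged data whose counterfactuals are unobserved, can be sketched with inverse-propensity weighting on a contextual-bandit-style log. The toy logging policy and all names below are illustrative assumptions.

```python
import numpy as np

def ipw_value(rewards, logging_probs, target_probs):
    """Estimate the target policy's expected reward from logged interactions.

    rewards:       (N,) rewards observed under the logging policy
    logging_probs: (N,) probability the logging policy gave its chosen action
    target_probs:  (N,) probability the target policy gives that same action
    """
    weights = target_probs / logging_probs  # importance weights
    return float(np.mean(weights * rewards))

# Toy log: a uniform logging policy over two actions; rewards favor action 1.
rng = np.random.default_rng(0)
actions = rng.integers(0, 2, size=10_000)
rewards = (actions == 1) + 0.1 * rng.standard_normal(10_000)
# Evaluate the deterministic "always pick action 1" policy: estimate ~= 1.0.
estimate = ipw_value(rewards,
                     logging_probs=np.full(10_000, 0.5),
                     target_probs=(actions == 1).astype(float))
print(round(estimate, 2))
```

The estimator is unbiased only when the logging propensities are known and positive wherever the target policy acts, which is one reason the causal-inference and bandit communities discussed above have so much to say to each other.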

The primary purpose of this workshop is to convene experts, practitioners, and interested young researchers from a wide range of backgrounds to discuss recent developments around causal inference in sequential decision making and the avenues forward on the topic, especially ones that bring together ideas from different fields. …