Workshop: Privacy in Machine Learning (PriML) 2021 Tue 14 Dec 01:20 a.m.
The goal of our workshop is to bring together privacy experts working in academia and industry to discuss the present and future of technologies that enable machine learning with privacy. The workshop will focus on the technical aspects of privacy research and deployment, with invited and contributed talks by distinguished researchers in the area. By design, the workshop should serve as a meeting point for regular NeurIPS attendees interested in or working on privacy to meet other parts of the privacy community (security researchers, legal scholars, industry practitioners). The focus this year will include emerging problems such as machine unlearning, privacy-fairness tradeoffs, and legal challenges in recent deployments of differential privacy (e.g., that of the US Census Bureau). We will conclude the workshop with a panel discussion titled “Machine Learning and Privacy in Practice: Challenges, Pitfalls and Opportunities”. A diverse set of panelists will address the challenges faced in applying these technologies to the real world. The programme of the workshop will emphasize the diversity of points of view on the problem of privacy. We will also ensure that there is ample time for discussions that encourage networking between researchers, which should result in mutually beneficial new long-term collaborations.
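As background for the differential privacy discussion, here is a minimal sketch of the Laplace mechanism, the textbook building block behind deployments such as the Census Bureau's; the function, dataset, and parameters below are illustrative, not those of any production system.

```python
# A minimal sketch of the Laplace mechanism for releasing a counting query
# under epsilon-differential privacy. All names and data are illustrative.
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Release a noisy count of records satisfying `predicate`.
    A counting query has L1 sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy."""
    true_count = sum(1 for record in data if predicate(record))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)
ages = [23, 35, 44, 29, 61, 57, 38, 42]
# The true count is 4; the released value fluctuates around it.
print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng))
```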
Workshop: Bayesian Deep Learning Tue 14 Dec 03:00 a.m.
To deploy deep learning in the wild responsibly, we must know when models are making unsubstantiated guesses. The field of Bayesian Deep Learning (BDL) has been a focal point in the ML community for the development of such tools. Big strides have been made in BDL in recent years, with the field making an impact outside of the ML community in fields including astronomy, medical imaging, the physical sciences, and many others. But the field of BDL itself is facing an evaluation crisis: most BDL papers evaluate the uncertainty estimation quality of new methods on MNIST and CIFAR alone, ignoring the needs of the real-world applications that use BDL. Therefore, apart from discussing the latest advances in BDL methodologies, a particular focus of this year’s programme is on the reliability of BDL techniques in downstream tasks. This focus is reflected through invited talks from practitioners in other fields and by working together with the two NeurIPS challenges in BDL (the Approximate Inference in Bayesian Deep Learning Challenge and the Shifts Challenge on Robustness and Uncertainty under Real-World Distributional Shift), advertising work done in applications including autonomous driving, medicine, space, and more. We hope that the mainstream BDL community will adopt real-world benchmarks based on such applications, pushing the field forward beyond MNIST and CIFAR evaluations.
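To make "knowing when models are making unsubstantiated guesses" concrete, here is a minimal sketch of one common baseline studied in BDL, ensemble-based predictive uncertainty; the probabilities and names below are illustrative only.

```python
# A minimal sketch of ensemble-based predictive uncertainty: disagreement
# among independently trained models signals when a prediction is an
# unsubstantiated guess. All numbers here are illustrative.
import numpy as np

def predictive_entropy(member_probs):
    """member_probs: (M, K) class probabilities from M ensemble members.
    Returns the entropy (in nats) of the averaged predictive distribution,
    a standard measure of total predictive uncertainty."""
    p = member_probs.mean(axis=0)
    return -(p * np.log(p + 1e-12)).sum()

confident = np.array([[0.95, 0.05], [0.93, 0.07], [0.96, 0.04]])
uncertain = np.array([[0.90, 0.10], [0.15, 0.85], [0.55, 0.45]])
print(predictive_entropy(confident))  # low entropy: members agree
print(predictive_entropy(uncertain))  # high entropy: members disagree
```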
Workshop: The Symbiosis of Deep Learning and Differential Equations Tue 14 Dec 03:45 a.m.
Deep learning can solve differential equations, and differential equations can model deep learning. What have we learned and where to next?
The focus of this workshop is on the interplay between deep learning (DL) and differential equations (DEs). In recent years, there has been a rapid increase in machine learning applications in the computational sciences, with some of the most impressive results at the interface of DL and DEs. These successes have widespread implications, as DEs are among the best-understood tools for the mathematical analysis of scientific knowledge, and they are fundamental building blocks for mathematical models in engineering, finance, and the natural sciences. This relationship is mutually beneficial. DL techniques have been used in a variety of ways to dramatically enhance the effectiveness of DE solvers and computer simulations. Conversely, DEs have also been used as mathematical models of the neural architectures and training algorithms arising in DL.
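One standard example of the second direction, included here as orientation rather than as a result of the workshop: a residual block can be read as an Euler step of an ordinary differential equation,

```latex
% Residual update = explicit Euler step with unit step size:
\[
  x_{t+1} = x_t + f(x_t, \theta_t)
  \quad\longleftrightarrow\quad
  \frac{\mathrm{d}x}{\mathrm{d}t} = f\bigl(x(t), \theta(t)\bigr),
\]
% so, in the limit of many small steps, the network defines a
% continuous-depth ("neural ODE") model whose forward pass is an ODE solve.
```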
This workshop will aim to bring together researchers from each discipline to encourage intellectual exchanges and cultivate relationships between the two communities. The scope of the workshop will include important topics at the intersection of DL and DEs.
Workshop: Advances in Programming Languages and Neurosymbolic Systems (AIPLANS) Tue 14 Dec 03:45 a.m.
Neural information processing systems have benefited tremendously from the availability of programming languages and frameworks for automatic differentiation (AD). Not only do such systems benefit from programming languages for automatic inference, but they can also be considered languages in their own right, consisting of differentiable and stochastic primitives. Combined with neural language models, these systems are increasingly capable of generating symbolic programs a human programmer might write in a high-level language. Developing neurosymbolic systems for automatic program synthesis requires insights from both statistical learning and programming languages.
AIPLANS invites all researchers working towards this shared purpose in these two communities to build on common ground. Our workshop is designed to be as inclusive as possible towards researchers engaged in building programming languages and neurosymbolic systems.
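To make the role of AD concrete, here is a minimal sketch of forward-mode automatic differentiation via dual numbers; the Dual class and grad helper are illustrative names, not the API of any particular framework.

```python
# Forward-mode AD via dual numbers: each value carries its derivative,
# and the chain rule is applied by operator overloading.
from dataclasses import dataclass

@dataclass
class Dual:
    val: float   # primal value
    der: float   # derivative carried alongside

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def grad(f):
    """Return a function computing df/dx at x by seeding der = 1."""
    return lambda x: f(Dual(x, 1.0)).der

# d/dx (3x^2 + 2x) = 6x + 2, so the gradient at x = 4 is 26.
print(grad(lambda x: 3 * x * x + 2 * x)(4.0))  # 26.0
```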
Workshop: Political Economy of Reinforcement Learning Systems (PERLS) Tue 14 Dec 04:00 a.m.
Sponsored by the Center for Human-Compatible AI at UC Berkeley, and with support from the Simons Institute and the Center for Long-Term Cybersecurity, we are convening a cross-disciplinary group of researchers to examine the near-term policy concerns of Reinforcement Learning (RL). RL is a rapidly growing branch of AI research, with the capacity to learn to exploit our dynamic behavior in real time. From YouTube’s recommendation algorithm to post-surgery opioid prescriptions, RL algorithms are poised to permeate our daily lives. The ability of the RL system to tease out behavioral responses, and the human experimentation inherent to its learning, motivate a range of crucial policy questions about RL’s societal implications that are distinct from those addressed in the literature on other branches of Machine Learning (ML).
Workshop: Learning Meaningful Representations of Life (LMRL) Tue 14 Dec 04:00 a.m.
One of the greatest challenges facing biologists and the statisticians who work with them is representation learning: discovering and defining appropriate representations of data in order to perform complex, multi-scale machine learning tasks. This workshop is designed to bring together trainee and expert machine learning scientists with those at the very forefront of biological research for this purpose. Our full-day workshop will advance the joint project of the CS and biology communities with the goal of "Learning Meaningful Representations of Life" (LMRL), emphasizing interpretable representation learning of structure and principle.
We will organize around the theme "From Genomes to Phenotype, and Back Again": an extension of a long-standing effort in the biological sciences to assign biochemical and cellular functions to the millions of as-yet uncharacterized gene products discovered by genome sequencing. ML methods to predict phenotype from genotype are rapidly advancing and starting to achieve widespread success. At the same time, large-scale gene synthesis and genome editing technologies have rapidly matured and become the foundation for new scientific insight as well as biomedical and industrial advances. ML-based methods have the potential to accelerate and extend these technologies' application by providing tools for solving the key problem of going "back again," from a desired phenotype to the genotype necessary to achieve that desired set of observable characteristics. We will focus on this foundational design problem and its application to areas ranging from protein engineering to phylogeny, immunology, vaccine design, and next-generation therapies.
Generative modeling, semi-supervised learning, optimal experimental design, Bayesian optimization, and many other areas of machine learning have the potential to address the phenotype-to-genotype problem, and we propose to bring together experts in these fields as well as many others.
LMRL will take place on Dec 13, 2021.
Workshop: Medical Imaging meets NeurIPS Tue 14 Dec 04:50 a.m.
“Medical Imaging meets NeurIPS” aims to bring researchers together from the medical imaging and machine learning communities to create a cutting-edge venue for discussing the major challenges in the field and opportunities for research and novel applications. The event continues a successful series of workshops organized at NeurIPS 2017, 2018, 2019, and 2020. It will feature a series of invited speakers from academia, the medical sciences, and industry to present the latest work in progress and give an overview of recent technological advances and remaining major challenges. The workshop website is https://sites.google.com/view/med-neurips-2021.
Workshop: Your Model is Wrong: Robustness and misspecification in probabilistic modeling Tue 14 Dec 04:55 a.m.
Probabilistic modeling is a foundation of modern data analysis, due in part to the flexibility and interpretability of these methods, and has been applied to numerous application domains, such as the biological sciences, social and political sciences, engineering, and health care. However, any probabilistic model relies on assumptions that are necessarily a simplification of complex real-life processes; thus, any such model is inevitably misspecified in practice. In addition, as data set sizes grow and probabilistic models become more complex, applying a probabilistic modeling analysis often relies on algorithmic approximations, such as approximate Bayesian inference, numerical approximations, or data summarization methods. Thus, in many cases, approximations used for efficient computation lead to fitting a misspecified model by design (e.g., variational inference). Importantly, in some cases this misspecification leads to useful model inferences, but in others it may lead to misleading and potentially harmful inferences that may then be used for important downstream tasks, such as making scientific inferences or policy decisions.
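One way to make the "misspecified by design" point precise for variational inference is the standard evidence decomposition; this is a textbook identity, not a result specific to this workshop:

```latex
\[
  \log p(x)
  \;=\;
  \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z)}{q(z)}\right]}_{\mathrm{ELBO}(q)}
  \;+\;
  \mathrm{KL}\bigl(q(z)\,\|\,p(z \mid x)\bigr).
\]
% Maximizing the ELBO over a restricted family Q therefore minimizes the KL
% divergence to the true posterior only within Q: the fitted q is the best
% member of Q, not p(z|x) itself, even when the model p is correctly specified.
```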
The goal of the workshop is to bring together researchers focused on methods, applications, and theory to outline some of the core problems in specifying and applying probabilistic models in modern data contexts, along with the current state-of-the-art solutions. Participants will leave the workshop with (i) exposure to recent advances in the field, (ii) an idea of the current major challenges in the field, and (iii) an introduction to methods meeting these challenges. These goals will be accomplished through a series of invited and contributed talks, poster spotlights, and poster sessions, as well as ample time for discussion and live Q&A.
Workshop: Ecological Theory of Reinforcement Learning: How Does Task Design Influence Agent Learning? Tue 14 Dec 05:00 a.m.
This workshop builds connections between different areas of RL centered around the understanding of algorithms and their context. We are interested in questions such as, but not limited to: (i) How can we gauge the complexity of an RL problem?, (ii) Which classes of algorithms can tackle which classes of problems?, and (iii) How can we develop practically applicable guidelines for formulating RL tasks that are tractable to solve? We expect submissions that address these and other related questions through an ecological and data-centric view, pushing forward the limits of our comprehension of the RL problem.
Workshop: eXplainable AI approaches for debugging and diagnosis Tue 14 Dec 05:00 a.m.
Recently, artificial intelligence (AI) has seen an explosion of deep learning (DL) models, which are able to reach superhuman performance on several tasks. These improvements, however, come at a cost: DL models are “black boxes”, where one feeds an input and obtains an output without understanding the motivations behind that prediction or decision. The eXplainable AI (XAI) field tries to address such problems by proposing methods that explain the behavior of these networks.
In this workshop, we narrow the XAI focus to the specific case in which developers or researchers need to debug their models and diagnose system behaviors. This type of user typically has substantial knowledge about the models themselves but needs to validate, debug, and improve them.
This is an important topic for several reasons. For example, domains like healthcare and justice require that experts are able to validate DL models before deployment. Despite this, the development of novel deep learning models is dominated by trial-and-error phases guided by aggregated metrics and old benchmarks that tell us very little about the skills and utility of these models. Moreover, the debugging phase is a nightmare for practitioners too.
Another community working on tracking and debugging machine learning models is visual analytics, which proposes systems that help users understand and interact with machine learning models. In recent years, methodologies that explain DL models have become central to these systems. As a result, the interaction between the XAI and visual analytics communities has become more and more important.
The workshop aims to advance the discourse by collecting novel methods and discussing challenges, issues, and goals around the usage of XAI approaches to debug and improve current deep learning models. To achieve this goal, the workshop aims to bring together researchers and practitioners from both fields, strengthening their collaboration.
Join our Slack channel for live and offline Q&A with authors and presenters!
Workshop: Cooperative AI Tue 14 Dec 05:20 a.m.
The human ability to cooperate in a wide range of contexts is a key ingredient in the success of our species. Problems of cooperation, in which agents seek ways to jointly improve their welfare, are ubiquitous and important. They can be found at every scale, from the daily routines of highway driving, communicating in shared language, and collaborating at work, to the global challenges of climate change, pandemic preparedness, and international trade. With AI agents playing an ever greater role in our lives, we must endow them with similar abilities. In particular, they must understand the behaviors of others, find common ground by which to communicate with them, make credible commitments, and establish institutions which promote cooperative behavior. By construction, the goal of Cooperative AI is interdisciplinary in nature. Therefore, our workshop will bring together scholars from diverse backgrounds, including reinforcement learning (and inverse RL), multi-agent systems, human-AI interaction, game theory, mechanism design, social choice, fairness, cognitive science, language learning, and interpretability. This year we will organize the workshop along two axes. First, we will discuss how to incentivize cooperation in AI systems, developing algorithms that can act effectively in general-sum settings and that encourage others to cooperate. The second focus is on how to implement effective coordination, given that cooperation is already incentivized. For example, we may examine zero-shot coordination, in which AI agents need to coordinate with novel partners at test time. This setting is highly relevant to human-AI coordination, and provides a stepping stone for the community towards full Cooperative AI.
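As a toy illustration of why cooperation must be incentivized in general-sum settings, consider a two-player stag hunt; this is a classroom example with illustrative payoffs, not anything specific to the workshop program.

```python
# A minimal sketch of a general-sum matrix game (a stag hunt): both
# (stag, stag) and (hare, hare) are equilibria, but only one is mutually
# optimal, so cooperation needs incentives. Payoffs are illustrative.
import numpy as np

# Actions: 0 = stag (cooperate), 1 = hare (defect to the safe option).
# P1[a1, a2] is player 1's payoff; the game is symmetric, so P2 = P1.T.
P1 = np.array([[4.0, 0.0],
               [3.0, 3.0]])
P2 = P1.T

def is_nash(a1, a2):
    # Neither player can gain by unilaterally deviating.
    return (P1[a1, a2] >= P1[1 - a1, a2]) and (P2[a1, a2] >= P2[a1, 1 - a2])

for a1 in (0, 1):
    for a2 in (0, 1):
        print((a1, a2), "Nash" if is_nash(a1, a2) else "not Nash")
# (0, 0) and (1, 1) are both Nash equilibria, but (0, 0) Pareto-dominates.
```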
Workshop: Bridging the Gap: from Machine Learning Research to Clinical Practice Tue 14 Dec 05:30 a.m.
Machine learning (ML) methods often achieve superhuman performance levels; however, most existing machine learning research in the medical domain is stalled at the research-paper level and is not implemented into daily clinical practice. To realize the promise of cutting-edge ML techniques and bring this exciting research to fruition, we must bridge the gap between research and the clinic. In this workshop, we aim to bring together ML researchers and clinicians to discuss the challenges and potential solutions for enabling the use of state-of-the-art ML techniques in daily clinical practice and ultimately improving healthcare, by trying to answer questions such as: What are the procedures that bring humans in the loop for auditing ML systems for healthcare? Are the proposed ML methods robust to changes in population, distribution shifts, or other types of biases? What must ML methods and systems fulfill to be successfully deployed in the clinic? What are the failure modes of ML models for healthcare? How can we develop methods for improved interpretability of ML predictions in the context of healthcare? And many others. We will further discuss translational and implementational aspects and talk about challenges and lessons learned from integrating ML systems into clinical workflows.
Workshop: AI for Credible Elections: A Call to Action Tue 14 Dec 05:45 a.m.
Credible Elections are vital to democracy. How can AI help?
Artificial intelligence and machine learning have transformed modern society. They also impact how elections are conducted in democracies, with mixed outcomes. For example, digital marketing campaigns have enabled candidates to connect with voters at scale and communicate remotely during COVID-19, but there remains widespread concern about the spread of election disinformation as the result of AI-enabled bots and aggressive strategies. In response, we propose a workshop that will examine the challenges of credible elections globally in an academic setting, with apolitical discussion of significant issues. The speakers, panels, and reviewed papers will discuss current and best practices in holding elections, tools available to candidates, and the experience of voters. They will highlight gaps and lessons learned from AI-based interventions. To ground the discussion, the invited speakers and panelists are drawn from three illustrative geographies: the US, representing one of the world’s oldest democracies; India, representing the largest democracy in the world; and Estonia, representing a country using digital technologies extensively during elections and as a facet of daily life.
Workshop: Machine Learning in Public Health Tue 14 Dec 05:55 a.m.
Public health and population health refer to the study of daily-life factors, prevention efforts, and their effects on the health of populations. Building on the success of our first workshop at NeurIPS 2020, this workshop will focus on data and algorithms related to the non-medical conditions that shape our health, including structural, lifestyle, policy, social, behavioral, and environmental factors. The data traditionally used in machine learning and health problems are really about our interactions with the health care system, and this workshop aims to balance this with machine learning work using data on non-medical conditions. This year we also broaden and integrate discussion on machine learning in the closely related area of urban planning, which is concerned with the technical and political processes regarding the development and design of land use. This includes the built environment; air, water, and the infrastructure passing into and out of urban areas, such as transportation, communications, distribution networks, sanitation, and the protection and use of the environment; and the accessibility and equity of these resources. We make this extension this year due to the fundamentally and increasingly intertwined nature of human health and the environment, as well as the recent emergence of more modern data analytic tools in the urban planning realm. Public and population health and urban planning are at the heart of structural approaches to counteract inequality and build pluralistic futures that improve the health and well-being of populations.
Workshop: Out-of-distribution generalization and adaptation in natural and artificial intelligence Tue 14 Dec 06:00 a.m.
Out-of-distribution (OOD) generalization and adaptation is a key challenge the field of machine learning (ML) must overcome to achieve its eventual aims associated with artificial intelligence (AI). Humans, and possibly non-human animals, exhibit OOD capabilities far beyond modern ML solutions. It is natural, therefore, to wonder (i) what properties of natural intelligence enable OOD learning (for example, is a cortex required, can human organoids achieve OOD capabilities, etc.), and (ii) what research programs can most effectively identify and extract those properties to inform future ML solutions? Although many workshops have focused on aspects of (i), it is through the additional focus of (ii) that this workshop will best foster collaborations and research to advance the capabilities of ML.
This workshop is designed to bring together the foremost leaders in natural and artificial intelligence, along with the known and unknown upcoming stars in the fields, to answer the above two questions. Our hope is that at the end of the workshop, we will have a head start on identifying a vision that will (1) formalize hypothetical learning mechanisms that enable OOD generalization and adaptation, and characterize their capabilities and limitations; (2) propose experiments to measure, manipulate, and model biological systems to inspire, test, and validate such hypotheses; and (3) implement those hypotheses in hardware/software/wetware solutions to close the empirical gap between natural and artificial intelligence capabilities.
Second Workshop on Quantum Tensor Networks in Machine Learning Tue 14 Dec 06:00 a.m.
Quantum tensor networks in machine learning (QTNML) are envisioned to have great potential to advance AI technologies. Quantum machine learning [1][2] promises quantum advantages (potentially exponential speedups in training [3], quadratic improvements in learning efficiency [4]) over classical machine learning, while tensor networks provide powerful simulations of quantum machine learning algorithms on classical computers. As a rapidly growing interdisciplinary area, QTNML may serve as an amplifier for computational intelligence, a transformer for machine learning innovations, and a propeller for AI industrialization.
Tensor networks [5], contracted networks of factor core tensors, have arisen independently in several areas of science and engineering. Such networks appear in the description of physical processes, and an accompanying collection of numerical techniques has elevated the use of quantum tensor networks into a variational model of machine learning. These techniques have recently proven ripe for application to many traditional problems faced in deep learning [6,7,8]. More potential QTNML technologies are rapidly emerging, such as approximating probability functions and probabilistic graphical models [9,10,11,12]. Quantum algorithms are typically described by quantum circuits (quantum computational networks), which are themselves a class of tensor networks, creating an evident interplay between classical tensor network contraction algorithms and the execution of tensor contractions on quantum processors. The modern field of quantum-enhanced machine learning has started to utilize several tools from tensor network theory to create new quantum models of machine learning and to better understand existing ones. However, the topic of QTNML is relatively young and many open problems remain to be explored.
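To make "a contracted network of factor core tensors" concrete, here is a minimal NumPy sketch that contracts a small matrix product state; the shapes, bond dimension, and variable names are all illustrative.

```python
# Evaluating one amplitude of a 4-site matrix product state (MPS)
# by contracting its core tensors with einsum.
import numpy as np

rng = np.random.default_rng(0)
chi, d = 3, 2  # bond dimension and physical dimension

# Boundary cores are (d, chi); interior cores are (chi, d, chi).
A1 = rng.normal(size=(d, chi))
A2 = rng.normal(size=(chi, d, chi))
A3 = rng.normal(size=(chi, d, chi))
A4 = rng.normal(size=(chi, d))

# Contract the shared (bond) indices a, b, c; the open physical
# indices i, j, k, l remain.
psi = np.einsum("ia,ajb,bkc,cl->ijkl", A1, A2, A3, A4)

# Amplitude of the basis state |0110>: index into the contracted tensor.
print(psi[0, 1, 1, 0])
```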
Workshop: Deep Generative Models and Downstream Applications Tue 14 Dec 06:00 a.m.
Deep generative models (DGMs) have become an important research branch in deep learning, comprising a broad family of methods such as variational autoencoders, generative adversarial networks, normalizing flows, energy-based models, and autoregressive models. Many of these methods have been shown to achieve state-of-the-art results in the generation of synthetic data of different types, such as text, speech, images, music, and molecules. Beyond generating synthetic data, however, DGMs are of particular relevance in many practical downstream applications. A few examples are imputation and acquisition of missing data, anomaly detection, data denoising, compressed sensing, data compression, image super-resolution, molecule optimization, interpretation of machine learning methods, identifying causal structures in data, and generation of molecular structures. At present, there seems to be a disconnect between researchers working on new DGM-based methods and researchers applying such methods to practical problems (like the ones mentioned above). This workshop aims to fill this gap by bringing the two communities together.
Workshop: Machine Learning for the Developing World (ML4D): Global Challenges Tue 14 Dec 06:00 a.m.
While some nations are regaining normality almost a year and a half after the COVID-19 pandemic struck as a global challenge (schools are reopening, face mask mandates are being dropped, economies are recovering, and so on), other nations, especially developing ones, are in the midst of their most critical scenarios in terms of health, economy, and education. Although this ongoing pandemic has been a global challenge, it has had local consequences and necessities in developing regions that are not necessarily shared globally. This situation makes us question how global challenges such as access to vaccines, good internet connectivity, sanitation, and water, as well as poverty, climate change, and environmental degradation, among others, have had and will have local consequences in developing nations, and how machine learning approaches can assist in designing solutions that take these local characteristics into account.
Past iterations of the ML4D workshop have explored the development of smart solutions for intractable problems, the challenges and risks that arise when deploying machine learning models in developing regions, and the building of machine learning models with improved resilience. This year, we call on our community to identify and understand the particular challenges and consequences that global issues may produce in developing regions, while proposing machine learning-based solutions for tackling them.
Additionally, as part of COVID-19's global and local consequences, we will dedicate part of the workshop to understanding the challenges faced by machine learning research in developing regions since the pandemic started. We aim to support and incentivize ML4D research under these current challenges by including new sessions, such as a guidance and mentorship session for project proposals and a round-table session focused on understanding the constraints faced by researchers in our community.
Workshop on Human and Machine Decisions Tue 14 Dec 06:05 a.m.
Understanding human decision-making is a key focus of behavioral economics, psychology, and neuroscience, with far-reaching applications from public policy to industry. Recently, advances in machine learning have resulted in better predictive models of human decisions and have even enabled new theories of decision-making. On the other hand, machine learning systems are increasingly being used to make decisions that affect people, including hiring, resource allocation, and parole. These lines of work are deeply interconnected: learning what people value is crucial both to predict their own decisions and to make good decisions for them. In this workshop, we will bring together experts from the wide array of disciplines concerned with human and machine decisions to exchange ideas around three main focus areas: (1) using theories of decision-making to improve machine learning models, (2) using machine learning to inform theories of decision-making, and (3) improving the interaction between people and decision-making AIs.
Workshop: Tackling Climate Change with Machine Learning Tue 14 Dec 06:15 a.m.
The focus of this workshop is the use of machine learning to help address climate change, encompassing mitigation efforts (reducing greenhouse gas emissions), adaptation measures (preparing for unavoidable consequences), and climate science (our understanding of the climate and future climate predictions). The scope of the workshop includes climate-relevant applications of machine learning to the power sector, buildings and transportation infrastructure, agriculture and land use, extreme event prediction, disaster response, climate policy, and climate finance. The goals of the workshop are: (1) to showcase high-impact applications of ML to climate change mitigation, adaptation, and climate science, (2) to showcase novel and interesting problem settings and challenges for ML techniques, (3) to encourage fruitful collaboration between the ML community and a diverse set of researchers and practitioners from climate change-related fields, and (4) to promote dialogue with decision-makers in the private and public sectors to ensure that the work presented leads to responsible and meaningful deployment.
Workshop: Learning and Decision-Making with Strategic Feedback (StratML) Tue 14 Dec 07:00 a.m.
Classical treatments of machine learning rely on the assumption that the data, after deployment, resembles the data the model was trained on. However, as machine learning models are increasingly used to make consequential decisions about people, individuals often react strategically to the deployed model. These strategic behaviors, which can effectively invalidate the predictive models, have opened up new avenues of research and added new challenges to the deployment of machine learning algorithms in the real world.
Different aspects of strategic behavior have been studied by several communities both within and outside of machine learning. For example, the growing literature on strategic classification studies algorithms for finding strategy-robust decision rules, as well as the properties of such rules. Behavioral economics aims to understand and model people’s strategic responses. Recent works on learning in games study optimization algorithms for finding meaningful equilibria and solution concepts in competitive environments.
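One common formalization from the strategic classification literature, with illustrative notation: an agent with features x, facing a published decision rule f and a manipulation cost c, best-responds by moving to

```latex
\[
  \Delta(x) \;\in\; \arg\max_{x'} \; f(x') - c(x, x'),
\]
% so the learner must choose f to perform well on the induced distribution
% of manipulated features Delta(x) rather than on the raw x; this shift is
% precisely what can invalidate naively trained predictive models.
```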
This workshop aims to create a dialogue between these different communities, all studying aspects of decision-making and learning with strategic feedback. The goal is to identify common points of interest and open problems in the different subareas, as well as to encourage cross-disciplinary collaboration.
Workshop: Deployable Decision Making in Embodied Systems (DDM) Tue 14 Dec 07:00 a.m.
Embodied systems are playing an increasingly important role in our lives. Examples include, but are not limited to, autonomous driving, drone delivery, and service robots. In real-world deployments, these systems are required to safely learn and operate under various sources of uncertainty. As noted in the “Roadmap for US Robotics (2020)”, safe learning and adaptation is a key aspect of next-generation robotics. Learning is ingrained in all components of the robotics software stack, including perception, planning, and control. While the safety and robustness of these components have been identified as critical for real-world deployments, open issues and challenges are often discussed separately in the respective communities. In this workshop, we aim to bring together researchers from machine learning, computer vision, robotics, and control to facilitate interdisciplinary discussions on the topic of deployable decision making in embodied systems. Our workshop will focus on two discussion themes: (i) safe learning and decision making in uncertain and unstructured environments and (ii) efficient transfer learning for deployable embodied systems. To facilitate discussions and solicit participation from a broad audience, we plan to have a set of interactive lecture-style presentations, focused discussion panels, and a poster session with contributed paper presentations. By bringing researchers and industry professionals together and having detailed pre- and post-workshop plans, we envision this workshop to be an effort towards a long-term, interdisciplinary exchange on this topic.
4th Robot Learning Workshop: Self-Supervised and Lifelong Learning Tue 14 Dec 07:00 a.m.
Applying machine learning to real-world systems such as robots has been an important part of the NeurIPS community in past years. Progress in machine learning has enabled robots to demonstrate strong performance in helping humans in some household and care-taking tasks, manufacturing, logistics, transportation, and many other unstructured and human-centric environments. While these results are promising, access to high-quality, task-relevant data remains one of the largest bottlenecks for successful deployment of such technologies in the real world.
Methods to generate, re-use, and integrate more sources of valuable data, such as lifelong learning, transfer, and continuous improvement, could unlock the next step in performance. However, accessing these data sources comes with fundamental challenges, which include safety, stability, and the daunting issue of providing supervision for learning while the robot is in operation. Today, unique new opportunities are presenting themselves in this quest for robust, continuous learning: large-scale, self-supervised, and multimodal approaches to learning are matching, and often exceeding, state-of-the-art supervised learning approaches; reinforcement and imitation learning are becoming more stable and data-efficient in real-world settings; and new approaches combining strong, principled safety and stability guarantees with the expressive power of machine learning are emerging.
This workshop aims to discuss how these emerging trends in machine learning of self-supervision and lifelong learning can be best utilized in real-world robotic systems. We bring together experts with diverse perspectives on this topic to highlight the ways current successes in the field are changing the conversation around lifelong learning, and how this will affect the future of robotics, machine learning, and our ability to deploy intelligent, self-improving agents to enhance people's lives.
Our speaker talks have been prerecorded and are available on YouTube. The talks will NOT be replayed during the workshop. We encourage all participants to watch them ahead of time to make the panel discussions with the speakers more engaging and insightful.
More information can be found on the website: http://www.robot-learning.ml/2021/.
2nd Workshop on Self-Supervised Learning: Theory and Practice Tue 14 Dec 07:00 a.m.
Self-supervised learning (SSL) is an unsupervised approach for representation learning without relying on human-provided labels. It creates auxiliary tasks on unlabeled input data and learns representations by solving these tasks. SSL has demonstrated great success on images (e.g., MoCo [19], PIRL [9], SimCLR [20]) and text (e.g., BERT [21]) and has shown promising results in other data modalities, including graphs, time series, audio, etc. On a wide variety of tasks, SSL, without using human-provided labels, achieves performance that is close to fully supervised approaches. Existing SSL research mostly focuses on improving empirical performance without a theoretical foundation. While the proposed SSL approaches are empirically effective, why they perform well is theoretically unclear. For example, why do certain auxiliary tasks in SSL perform better than others? How many unlabeled data examples are needed by SSL to learn a good representation? How is the performance of SSL affected by neural architectures? In this workshop, we aim to bridge this gap between theory and practice. We bring together SSL-interested researchers from various domains to discuss the theoretical foundations of empirically well-performing SSL approaches and how theoretical insights can further improve SSL’s empirical performance. Unlike previous SSL-related workshops, which focus on the empirical effectiveness of SSL approaches without considering their theoretical foundations, our workshop focuses on establishing the theoretical foundation of SSL and providing theoretical insights for developing new SSL approaches.
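As a concrete example of such an auxiliary task, here is a minimal NumPy sketch of an InfoNCE-style contrastive objective in the spirit of SimCLR; it is a simplified variant (contrasting only across the two views, without SimCLR's full set of in-batch negatives), and all names are illustrative.

```python
# A simplified InfoNCE contrastive loss: each example's two augmented
# views form the positive pair; other examples in the batch are negatives.
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of N examples.
    Returns the mean loss treating (z1[i], z2[i]) as the positive pair."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau  # (N, N) temperature-scaled cosine similarities
    # Log-softmax over each row, with the diagonal as the target class.
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print(info_nce(z1, z2))
```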
Workshop: Machine learning from ground truth: New medical imaging datasets for unsolved medical problems. Tue 14 Dec 07:00 a.m.
This workshop will launch a new platform for open medical imaging datasets. Labeled with ground-truth outcomes curated around a set of unsolved medical problems, these data will deepen ways in which ML can contribute to health and raise a new set of technical challenges.
Workshop: Physical Reasoning and Inductive Biases for the Real World Tue 14 Dec 08:00 a.m.
Much progress has been made on end-to-end learning for physical understanding and reasoning. If successful, understanding and reasoning about the physical world promises far-reaching applications in robotics, machine vision, and the physical sciences. Despite this recent progress, our best artificial systems pale in comparison to the flexibility and generalization of human physical reasoning.
Neural information processing systems have shown promising empirical results on synthetic datasets, yet do not transfer well when deployed in novel scenarios (including the physical world). If physical understanding and reasoning techniques are to play a broader role in the physical world, they must be able to function across a wide variety of scenarios, including ones that might lie outside the training distribution. How can we design systems that satisfy these criteria?
Our workshop aims to investigate this broad question by bringing together experts from machine learning, the physical sciences, cognitive and developmental psychology, and robotics to investigate how these techniques may one day be employed in the real world. In particular, we aim to investigate the following questions: 1. What forms of inductive biases best enable the development of physical understanding techniques that are applicable to real-world problems? 2. How do we ensure that the outputs of a physical reasoning module are reasonable and physically plausible? 3. Is interpretability a necessity for physical understanding and reasoning techniques to be suitable to real-world problems?
Unlike end-to-end neural architectures that distribute bias across a large set of parameters, modern structured physical reasoning modules (differentiable physics, relational learning, probabilistic programming) maintain modularity and physical interpretability. We will discuss how these inductive biases might aid in generalization and interpretability, and how these techniques impact real-world problems.
Workshop: Data Centric AI Tue 14 Dec 08:30 a.m.
Data-Centric AI (DCAI) represents the recent transition from focusing on modeling to focusing on the underlying data used to train and evaluate models. Increasingly, common model architectures have begun to dominate a wide range of tasks, and predictable scaling rules have emerged. While building and using datasets has been critical to these successes, the endeavor is often artisanal: painstaking and expensive. The community lacks high-productivity, efficient open data engineering tools to make building, maintaining, and evaluating datasets easier, cheaper, and more repeatable. The DCAI movement aims to address this lack of tooling, best practices, and infrastructure for managing data in modern ML systems.
The main objective of this workshop is to cultivate the DCAI community into a vibrant interdisciplinary field that tackles practical data problems. We consider some of those problems to be: data collection/generation, data labeling, data preprocessing/augmentation, data quality evaluation, data debt, and data governance. Many of these areas are nascent, and we hope to further their development by knitting them together into a coherent whole. Together we will define the DCAI movement that will shape the future of AI and ML. Please see our call for papers below to take an active role in shaping that future! If you have any questions, please reach out to the organizers (neurips-data-centric-ai@googlegroups.com).
The ML community has a strong track record of building and using datasets for AI systems but, as noted above, the endeavor is often artisanal. The core challenge is to accelerate dataset creation and iteration while increasing the efficiency of use and reuse by democratizing data engineering and evaluation.
If 80 percent of machine learning work is data preparation, then ensuring data quality is the most important work of a machine learning team and therefore a vital research area. Human-labeled data has increasingly become the fuel and compass of AI-based software systems, yet innovative efforts have mostly focused on models and code. The growing focus on the scale, speed, and cost of building and improving datasets has had an impact on quality, which is nebulous and often circularly defined, since the annotators are the source of data and ground truth [Riezler, 2014]. The development of tools to make repeatable and systematic adjustments to datasets has also lagged. While dataset quality is still everyone's top concern, the ways in which quality is measured in practice are poorly understood and sometimes simply wrong. A decade later, we see some cause for concern: fairness and bias issues in labeled datasets [Goel and Faltings, 2019], quality issues in datasets [Crawford and Paglen, 2019], limitations of benchmarks [Kovaleva et al., 2019, Welty et al., 2019], reproducibility concerns in machine learning research [Pineau et al., 2018, Gunderson and Kjensmo, 2018], lack of documentation and replication of data [Katsuno et al., 2019], and unrealistic performance metrics [Bernstein 2021].
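As one concrete, well-established way to measure label quality in practice, here is a minimal sketch of Cohen's kappa, chance-corrected agreement between two annotators; the labels below are illustrative.

```python
# Cohen's kappa: agreement between two annotators beyond what chance
# alone would produce (1 = perfect agreement, 0 = chance level).
import numpy as np

def cohens_kappa(y1, y2):
    y1, y2 = np.asarray(y1), np.asarray(y2)
    labels = np.union1d(y1, y2)
    po = np.mean(y1 == y2)  # observed agreement rate
    # Expected agreement if each annotator labeled independently at random
    # according to their own marginal label frequencies.
    pe = sum(np.mean(y1 == c) * np.mean(y2 == c) for c in labels)
    return (po - pe) / (1.0 - pe)

ann1 = [1, 0, 1, 1, 0, 1, 0, 0]
ann2 = [1, 0, 1, 0, 0, 1, 1, 0]
print(cohens_kappa(ann1, ann2))  # 0.5: moderate agreement beyond chance
```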
We need a framework for excellence in data engineering that does not yet exist. In the rush to be first to market with data, aspects of maintainability, reproducibility, reliability, validity, and fidelity of datasets are often overlooked. We want to turn this way of thinking on its head and highlight examples, case studies, and methodologies for excellence in data collection. Building an active research community focused on Data-Centric AI is an important part of the process of defining the core problems and creating ways to measure progress in machine learning through data quality tasks.
Workshop: Math AI for Education (MATHAI4ED): Bridging the Gap Between Research and Smart Education Tue 14 Dec 08:55 a.m.
Mathematical reasoning is a unique aspect of human intelligence and a fundamental building block for scientific and intellectual pursuits. However, learning mathematics is often a challenging human endeavor that relies on expert instructors to create, teach, and evaluate mathematical material. From an educational perspective, AI systems that aid in this process offer increased inclusion and accessibility, efficiency, and understanding of mathematics. Moreover, building systems capable of understanding, creating, and using mathematics offers a unique setting for studying reasoning in AI. This workshop will investigate the intersection of mathematics education and AI, including applications to teaching, evaluation, and assistance. Enabling these applications requires not only innovations in math AI research, but also a better understanding of the challenges in real-world education scenarios. Hence, we will bring together a group of experts from a diverse set of backgrounds, institutions, and disciplines to drive progress on these and other real-world education scenarios, and to discuss the promise and challenge of integrating mathematical AI into education.
Workshop: Offline Reinforcement Learning Tue 14 Dec 09:00 a.m.
Offline reinforcement learning (RL) is a re-emerging area of study that aims to learn behaviors using only logged data, such as data from previous experiments or human demonstrations, without further environment interaction. It holds the potential for tremendous progress in a number of real-world decision-making problems where active data collection is expensive (e.g., in robotics, drug discovery, dialogue generation, recommendation systems) or unsafe/dangerous (e.g., healthcare, autonomous driving, or education). Such a paradigm promises to resolve a key challenge to bringing reinforcement learning algorithms out of constrained lab settings and into the real world. The first edition of the offline RL workshop, held at NeurIPS 2020, focused on and led to algorithmic development in offline RL. This year we propose to shift the focus from algorithm design to bridging the gap between offline RL research and real-world offline RL. Our aim is to create a space for discussion between researchers and practitioners on topics of importance for enabling offline RL methods in the real world. To that end, we have revised the topics and themes of the workshop, invited new speakers working on application-focused areas, and, building on last year's lively panel discussion, invited last year's panelists to participate in a retrospective panel on their changing perspectives.
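A canonical primitive behind learning and evaluation from logged data alone, included here as standard background rather than a workshop result, is the inverse propensity scoring (IPS) estimator for logged contextual-bandit data:

```latex
% Given logged tuples (x_i, a_i, r_i) collected under a known behavior
% policy mu, the value of a target policy pi can be estimated without
% further environment interaction as
\[
  \hat{V}(\pi) \;=\; \frac{1}{n} \sum_{i=1}^{n}
    \frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)}\, r_i,
\]
% which is unbiased whenever mu puts positive probability on every action
% that pi can take (the "overlap" or positivity condition).
```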
For details on submission please visit: https://offline-rl-neurips.github.io/2021 (Submission deadline: October 6, Anywhere on Earth)
Speakers:
Aviv Tamar (Technion - Israel Inst. of Technology)
Angela Schoellig (University of Toronto)
Barbara Engelhardt (Princeton University)
Sham Kakade (University of Washington/Microsoft)
Minmin Chen (Google)
Philip S. Thomas (UMass Amherst)
Workshop: Causal Inference Challenges in Sequential Decision Making: Bridging Theory and Practice Tue 14 Dec 10:50 a.m.
Sequential decision-making problems appear in settings as varied as healthcare, e-commerce, operations management, and policymaking, and depending on the context they can have very different features that make each problem unique. Problems can involve online learning or offline data; known cost structures or unknown counterfactuals; continuous actions, with or without constraints, or finite or combinatorial actions; stationary environments or environments with dynamic agents; and utilitarian considerations or fairness and equity considerations. More and more, causal inference and discovery and adjacent statistical theories have come to bear on such problems, from the early work on longitudinal causal inference from the last millennium up to recent developments in bandit algorithms and inference, dynamic treatment regimes, both online and offline reinforcement learning, interventions in general causal graphs and discovery thereof, and more. While the interaction between these theories has grown, expertise is spread across many different disciplines, including CS/ML, (bio)statistics, econometrics, ethics/law, and operations research.
The primary purpose of this workshop is to convene experts, practitioners, and interested young researchers from a wide range of backgrounds to discuss recent developments around causal inference in sequential decision making and the avenues forward on the topic, especially ones that bring together ideas from different fields. The all-virtual nature of this year's NeurIPS workshop makes it particularly felicitous to such an assembly. The workshop will combine invited talks and panels by a diverse group of researchers and practitioners from both academia and industry with contributed talks and a town-hall Q&A that will particularly seek to draw from younger individuals new to the area.