
Topological Data Analysis and Beyond

Bastian Rieck, Frederic Chazal, Smita Krishnaswamy, Roland Kwitt, Karthikeyan Natesan Ramamurthy, Yuhei Umeda, Guy Wolf
2020-12-10T23:00:00-08:00 - 2020-12-11T12:00:00-08:00
The last decade saw an enormous boost in the field of computational topology: methods and concepts from algebraic and differential topology, formerly confined to the realm of pure mathematics, have demonstrated their utility in numerous areas such as computational biology, personalised medicine, materials science, and time-dependent data analysis, to name a few.

The newly emerging domain comprising topology-based techniques is often referred to as topological data analysis (TDA). Beyond their applications in the aforementioned areas, TDA methods have also proven effective in supporting, enhancing, and augmenting both classical machine learning and deep learning models.
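For readers new to the area, here is a minimal sketch of a typical first TDA computation: a persistence diagram of noisy circle data. It is an illustration of ours, not workshop material, and it assumes the third-party ripser package is available:

```python
import numpy as np
from ripser import ripser  # pip install ripser

# Sample noisy points from a circle; its single loop should appear as a
# prominent 1-dimensional feature in the persistence diagram.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))

dgms = ripser(X)['dgms']      # persistence diagrams for H0 and H1
births, deaths = dgms[1].T    # H1 features: loops
print(deaths - births)        # one long-lived bar = the circle's loop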

We believe that it is time to bring together theorists and practitioners in a creative environment to discuss the goals beyond the currently-known bounds of TDA. We want to start a conversation between experts, non-experts, and users of TDA methods to debate the next steps the field should take. We also want to disseminate methods to a broader audience and demonstrate how easy the integration of topological concepts into existing methods can be.

Privacy Preserving Machine Learning - PriML and PPML Joint Edition

Borja Balle, James Bell, Aurélien Bellet, Kamalika Chaudhuri, Adria Gascon, Antti Honkela, Antti Koskela, Casey Meehan, Olga Ohrimenko, Mi Jung Park, Mariana Raykova, Mary Anne Smart, Yu-Xiang Wang, Adrian Weller
2020-12-11T00:00:00-08:00 - 2020-12-11T09:25:00-08:00
This one-day workshop focuses on privacy-preserving techniques for machine learning and disclosure in large-scale data analysis, both in the distributed and centralized settings, and on scenarios that highlight the importance of and need for these techniques (e.g., via privacy attacks). There is growing interest from the Machine Learning (ML) community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure. Simultaneously, the systems security and cryptography community has proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring a range of approaches listed below. Additionally, given the tension between the adoption of machine learning technologies and ethical, technical, and regulatory issues about privacy, as highlighted during the COVID-19 pandemic, we invite submissions for the special track on this topic.
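As a minimal, illustrative sketch of one of these building blocks (our example, not code from the workshop), the Laplace mechanism releases a query answer with epsilon-differential privacy by adding noise calibrated to the query's sensitivity:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=np.random.default_rng()):
    # Release true_value with epsilon-DP by adding Laplace noise
    # whose scale is sensitivity / epsilon.
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Counting query over a database: adding or removing one person
# changes the count by at most 1, so sensitivity = 1.
count = 1234
print(laplace_mechanism(count, sensitivity=1.0, epsilon=0.5))
```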

Learning Meets Combinatorial Algorithms

Marin Vlastelica, Jialin Song, Aaron Ferber, Brandon Amos, Georg Martius, Bistra Dilkina, Yisong Yue
2020-12-11T03:00:00-08:00 - 2020-12-12T16:00:00-08:00
We propose to organize a workshop on machine learning and combinatorial algorithms. The combination of methods from machine learning and classical AI is an emerging trend. Many researchers have argued that “future AI” methods somehow need to incorporate discrete structures and symbolic/algorithmic reasoning. Additionally, learning-augmented optimization algorithms can impact a broad range of difficult but consequential optimization settings. Coupled learning and combinatorial algorithms have the ability to impact real-world settings such as hardware and software architectural design, self-driving cars, ridesharing, organ matching, supply chain management, theorem proving, and program synthesis, among many others. We aim to present diverse perspectives on the integration of machine learning and combinatorial algorithms.

This workshop aims to bring together academic and industrial researchers in order to describe recent advances and build lasting communication channels for the discussion of future research directions pertaining to the integration of machine learning and combinatorial algorithms. The workshop will connect researchers from various relevant backgrounds: those working on hybrid methods, those with particular expertise in combinatorial algorithms, those working on problems whose solution likely requires new approaches, and everyone interested in learning about this emerging field of research. We aim to highlight open problems in bridging the gap between machine learning and combinatorial optimization in order to facilitate new research directions.
The workshop will foster collaboration between the communities by curating a list of problems and challenges to promote research in the field.

Our technical topics of interest include (but are not limited to):
- Hybrid architectures with combinatorial building blocks
- Attacking hard combinatorial problems with learning
- Neural architectures mimicking combinatorial algorithms

Further information about speakers, paper submissions, and the schedule is available at the workshop website: https://sites.google.com/view/lmca2020/home .

Meta-Learning

Jane Wang, Joaquin Vanschoren, Erin Grant, Jonathan Schwarz, Francesco Visin, Jeff Clune, Roberto Calandra
2020-12-11T03:00:00-08:00 - 2020-12-11T12:00:00-08:00
Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to learn new tasks more efficiently, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers and policies over hand-crafted features, to learning representations over which classifiers and policies operate, and finally to learning algorithms that themselves acquire representations, classifiers, and policies.

Meta-learning methods are of substantial practical interest. For instance, they have been shown to yield new state-of-the-art automated machine learning algorithms and architectures, and have substantially improved few-shot learning systems. Moreover, the ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in cognitive science and reward learning in neuroscience.

Tackling Climate Change with ML

David Dao, Evan Sherwin, Priya Donti, Lauren Kuntz, Lynn Kaack, Yumna Yusuf, David Rolnick, Catherine Nakalembe, Claire Monteleoni, Yoshua Bengio
2020-12-11T03:00:00-08:00 - 2020-12-11T16:00:00-08:00
Climate change is one of the greatest problems society has ever faced, with increasingly severe consequences for humanity as natural disasters multiply, sea levels rise, and ecosystems falter. Since climate change is a complex issue, action takes many forms, from designing smart electric grids to tracking greenhouse gas emissions through satellite imagery. While no silver bullet, machine learning can be an invaluable tool in fighting climate change via a wide array of applications and techniques. These applications require algorithmic innovations in machine learning and close collaboration with diverse fields and practitioners. This workshop is intended as a forum for those in the machine learning community who wish to help tackle climate change. Building on our past workshops on this topic, this workshop aims to especially emphasize the pipeline to impact, through conversations about machine learning with decision-makers and other global leaders in implementing climate change strategies. The all-virtual format of NeurIPS 2020 provides a special opportunity to foster cross-pollination between researchers in machine learning and experts in complementary fields.

OPT2020: Optimization for Machine Learning

Courtney Paquette, Mark Schmidt, Sebastian Stich, Quanquan Gu, Martin Takac
2020-12-11T03:15:00-08:00 - 2020-12-11T16:30:00-08:00
Optimization lies at the heart of many machine learning algorithms and enjoys great interest in our community. Indeed, this intimate relation of optimization with ML is the key motivation for the OPT series of workshops.

Looking back over the past decade, a strong trend is apparent: The intersection of OPT and ML has grown to the point that now cutting-edge advances in optimization often arise from the ML community. The distinctive feature of optimization within ML is its departure from textbook approaches, in particular, its focus on a different set of goals driven by "big-data, nonconvexity, and high-dimensions," where both theory and implementation are crucial.

We wish to use OPT 2020 as a platform to foster discussion, discovery, and dissemination of the state-of-the-art in optimization as relevant to machine learning. And well beyond that: as a platform to identify new directions and challenges that will drive future research, and continue to build the OPT+ML joint research community.

**Invited Speakers**
Volkan Cevher (EPFL)
Michael Friedlander (UBC)
Donald Goldfarb (Columbia)
Andreas Krause (ETH Zurich)
Suvrit Sra (MIT)
Rachel Ward (UT Austin)
Ashia Wilson (MSR)
Tong Zhang (HKUST)

Please join us in gather.town for all breaks and poster sessions (for the link, see the abstract of any break or poster session; opens on December 11).

Advances and Opportunities: Machine Learning for Education

Kumar Garg, Neil Heffernan, Kayla Meyers
2020-12-11T05:30:00-08:00 - 2020-12-11T14:10:00-08:00
This workshop will explore how advances in machine learning could be applied to improve educational outcomes.

Such an exploration is timely given: the growth of online learning platforms, which have the potential to serve as testbeds and data sources; a growing pool of CS talent hungry to apply their skills towards social impact; and the chaotic global shift to online learning during COVID-19 and the many gaps it has exposed.

The opportunities for machine learning in education are substantial, from uses of NLP to power automated feedback for the large volume of student work that currently gets no review, to advances in voice recognition that diagnose errors by early readers.

Similar to the rise of computational biology, recognizing and realizing these opportunities will require a community of researchers and practitioners who are bilingual: technically adept at cutting-edge advances in machine learning, and conversant in the most pressing challenges and opportunities in education.

With senior representatives from industry, academia, government, and education, this workshop is a step in that community-building process, with a focus on three things:
1. identifying what learning platforms are of a size and instrumentation that the ML community can leverage,
2. building a community of experts bringing rigorous theoretical and methodological insights across academia, industry, and education, to facilitate combinatorial innovation,
3. scoping potential Kaggle competitions and “ImageNets for Education,” where benchmark datasets fine-tuned to an education goal can fuel goal-driven algorithmic innovation.

In addition to bringing in speakers across verticals and issue areas, the talks and small group conversations in this workshop will be designed for a diverse audience: researchers, industry professionals, teachers, and students. This interdisciplinary approach promises to generate new connections and high-potential partnerships, and to inspire novel applications for machine learning in education.

This workshop is not the first machine learning for education workshop; there have been several (ml4ed.cc), and their existence speaks to the recognition of the obvious importance that ML will have for education moving forward!

Differential Geometry meets Deep Learning (DiffGeo4DL)

Joey Bose, Emile Mathieu, Charline Le Lan, Ines Chami, Fred Sala, Christopher De Sa, Maximilian Nickel, Chris Ré, Will Hamilton
2020-12-11T05:45:00-08:00 - 2020-12-11T14:00:00-08:00
Recent years have seen a surge in research at the intersection of differential geometry and deep learning, including techniques for stochastic optimization on curved spaces (e.g., hyperbolic or spherical manifolds), learning embeddings for non-Euclidean data, and generative modeling on Riemannian manifolds. Insights from differential geometry have led to new state-of-the-art approaches to modeling complex real-world data, such as graphs with hierarchical structure, 3D medical data, and meshes.
Thus, it is of critical importance to understand, from a geometric lens, the natural invariances, equivariances, and symmetries that reside within data.
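To make "stochastic optimization on curved spaces" concrete, here is a minimal sketch, in our own toy setting rather than anything prescribed by the workshop, of Riemannian gradient descent on the unit sphere: the Euclidean gradient is projected onto the tangent space and each iterate is retracted back onto the manifold by normalization:

```python
import numpy as np

def riemannian_gd_sphere(grad_f, x0, lr=0.1, steps=200):
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        g = grad_f(x)
        g_tan = g - np.dot(g, x) * x   # project onto the tangent space at x
        x = x - lr * g_tan             # step along the tangent direction
        x = x / np.linalg.norm(x)      # retract back onto the sphere
    return x

# Toy use: minimizing -x^T A x over the sphere recovers A's top eigenvector.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
x_star = riemannian_gd_sphere(lambda x: -2.0 * A @ x, np.array([1.0, 0.3]))
print(x_star)
```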

In order to support the burgeoning interest in differential geometry within deep learning, the primary goal of this workshop is to facilitate community building and to work towards identifying the key challenges relative to standard deep learning, along with techniques to overcome them. With many new researchers beginning projects in this area, we hope to bring them together to consolidate this fast-growing area into a healthy and vibrant subfield. In particular, we aim to strongly promote novel and exciting applications of differential geometry for deep learning, with an emphasis on bridging theory and practice; this is reflected in our choice of invited speakers, who include both machine learning practitioners and researchers who are primarily geometers.

Machine Learning for Health (ML4H): Advancing Healthcare for All

Stephanie Hyland, Allen Schmaltz, Charles Onu, Ehi Nosakhare, Emily Alsentzer, Irene Y Chen, Matthew McDermott, Subhrajit Roy, Benjamin Akera, Dani Kiyasseh, Fabian Falck, Griffin Adams, Ioana Bica, Oliver J Bear Don't Walk IV, Suproteem Sarkar, Stephen Pfohl, Andrew Beam, Brett Beaulieu-Jones, Danielle Belgrave, Tristan Naumann
2020-12-11T06:00:00-08:00 - 2020-12-11T16:20:00-08:00
The application of machine learning to healthcare is often characterised by the development of cutting-edge technology aiming to improve patient outcomes. By developing sophisticated models on high-quality datasets we hope to better diagnose, forecast, and otherwise characterise the health of individuals. At the same time, when we build tools which aim to assist highly-specialised caregivers, we limit the benefit of machine learning to only those who can access such care. The fragility of healthcare access both globally and locally prompts us to ask, “How can machine learning be used to help enable healthcare for all?” - the theme of the 2020 ML4H workshop.

Participants at the workshop will be exposed to new questions in machine learning for healthcare, and be prompted to reflect on how their work sits within larger healthcare systems. Given the growing community of researchers in machine learning for health, the workshop will provide an opportunity to discuss common challenges, share expertise, and potentially spark new research directions. By drawing in experts from adjacent disciplines such as public health, fairness, epidemiology, and clinical practice, we aim to further strengthen the interdisciplinarity of machine learning for health.

See our workshop for more information: https://ml4health.github.io/

Workshop on Dataset Curation and Security

Nathalie Baracaldo Angel, Yonatan Bisk, Avrim Blum, Michael Curry, John Dickerson, Micah Goldblum, Tom Goldstein, Bo Li, Avi Schwarzschild
2020-12-11T06:00:00-08:00 - 2020-12-11T11:00:00-08:00
Classical machine learning research has been focused largely on models, optimizers, and computational challenges. As technical progress and hardware advancements ease these challenges, practitioners are now finding that the limitations and faults of their models are the result of their datasets. This is particularly true of deep networks, which often rely on huge datasets that are too large and unwieldy for domain experts to curate them by hand. This workshop addresses issues in the following areas: data harvesting, dealing with the challenges and opportunities involved in creating and labeling massive datasets; data security, dealing with protecting datasets against risks of poisoning and backdoor attacks; policy, security, and privacy, dealing with the social, ethical, and regulatory issues involved in collecting large datasets, especially with regards to privacy; and data bias, related to the potential of biased datasets to result in biased models that harm members of certain groups. Dates and details can be found at [securedata.lol](https://securedata.lol/)

Learning Meaningful Representations of Life (LMRL.org)

Elizabeth Wood, Debora Marks, Thouis Jones, Adji Dieng, Alan Aspuru-Guzik, Anshul Kundaje, Barbara Engelhardt, Chang Liu, Edward Boyden, Kresten Lindorff-Larsen, Mor Nitzan, Smita Krishnaswamy, Wouter Boomsma, Yixin Wang, David Van Valen, Orr Ashenberg
2020-12-11T06:00:00-08:00 - 2020-12-11T18:15:00-08:00
This workshop is designed to bring together trainees and experts in machine learning with those at the very forefront of biological research today. Our full-day workshop will advance the joint project of the CS and biology communities with the goal of "Learning Meaningful Representations of Life" (LMRL), emphasizing interpretable representation learning of structure and principle. As last year, the workshop will be oriented around four layers of biological abstraction: molecule, cell, synthetic biology, and phenotypes.

Mapping structural molecular detail to organismal phenotype and function; predicting emergent effects of human genetic variation; and designing novel interventions including prevention, diagnostics, therapeutics, and the development of new synthetic biotechnologies for causal investigations are just some of the challenges that hinge on appropriate formal structures to make them accessible to the broadest possible community of computer scientists, statisticians, and their tools.

Human in the loop dialogue systems

Behnam Hedayatnia, Rahul Goel, Shereen Oraby, Abigail See, Chandra Khatri, Y-Lan Boureau, Alborz Geramifard, Marilyn Walker, Dilek Hakkani-Tur
2020-12-11T06:10:00-08:00 - 2020-12-11T17:20:00-08:00
Conversational interaction systems such as Amazon Alexa, Google Assistant, Apple Siri, and Microsoft Cortana have become very popular in recent years. Such systems have allowed users to interact with a wide variety of content on the web through a conversational interface. Research challenges such as the Dialogue System Technology Challenges, the Dialogue Dodecathlon, the Amazon Alexa Prize, and the Vision and Language Navigation task have continued to inspire research in conversational AI. These challenges have brought together researchers from different communities such as speech recognition, spoken language understanding, reinforcement learning, language generation, and multi-modal question answering.
Unlike other popular NLP tasks, dialogue frequently has humans in the loop, whether for evaluation, active learning, or online reward estimation. Through this workshop we aim to bring together researchers from academia and industry to discuss the challenges and opportunities in such human-in-the-loop setups. We hope this sparks interesting discussions about conversational agents, interactive systems, and how we can use humans most effectively when building such setups. We will highlight areas such as human evaluation setups, reliability in human evaluation, human-in-the-loop training, interactive learning, and user modeling. We also highly encourage work on non-English dialogue systems in these areas.
The one-day workshop will include talks from senior technical leaders and researchers sharing insights on evaluating dialogue systems. We also plan oral presentations and poster sessions on work related to the topic of the workshop. Finally, we will end the workshop with an interactive panel of speakers. As an outcome, we expect participants from the NeurIPS community to walk away with a better understanding of human-in-the-loop dialogue modeling as well as the key areas of research in this field. Additionally, we would like to see discussions around unifying human evaluation setups.

The pre-registration experiment: an alternative publication model for machine learning research

Luca Bertinetto, João F. Henriques, Samuel Albanie, Michela Paganini, Gul Varol
2020-12-11T06:15:00-08:00 - 2020-12-11T14:30:00-08:00
Machine learning research has benefited considerably from the adoption of standardised public benchmarks. In this workshop proposal, we do not argue against the importance of these benchmarks, but rather against the current incentive system and its heavy reliance upon performance as a proxy for scientific progress. The status quo incentivises researchers to “beat the state of the art”, potentially at the expense of deep scientific understanding and rigorous experimental design. Since typically only positive results are rewarded, the negative results inevitably encountered during research are often omitted, allowing many other groups to unknowingly and wastefully repeat the same negative findings. Pre-registration is a publishing and reviewing model that aims to address these issues by changing the incentive system. A pre-registered paper is a regular paper that is submitted for peer-review without any experimental results, describing instead an experimental protocol to be followed after the paper is accepted. This implies that it is important for the authors to make compelling arguments from theory or past published evidence. As for reviewers, they must assess these arguments together with the quality of the experimental design, rather than comparing numeric results. In this workshop, we propose to conduct a full pilot study in pre-registration for machine learning. It follows a successful small-scale trial of pre-registration in computer vision and is more broadly inspired by the success of pre-registration in the life sciences.

Differentiable computer vision, graphics, and physics in machine learning

Krishna Jatavallabhula, Kelsey Allen, Victoria Dean, Johanna Hansen, Shuran Song, Florian Shkurti, Liam Paull, Derek Nowrouzezahrai, Josh Tenenbaum
2020-12-11T06:45:00-08:00 - 2020-12-11T14:30:00-08:00
“Differentiable programs” are parameterized programs that allow themselves to be rewritten by gradient-based optimization. They are ubiquitous in modern-day machine learning. Recently, explicitly encoding our knowledge of the rules of the world in the form of differentiable programs has become more popular. In particular, differentiable realizations of well-studied processes such as physics, rendering, projective geometry, and optimization, to name a few, have enabled the design of several novel learning techniques. For example, many approaches have been proposed for unsupervised learning of depth estimation from unlabeled videos. Differentiable 3D reconstruction pipelines have demonstrated the potential for task-driven representation learning. A number of differentiable rendering approaches have been shown to enable single-view 3D reconstruction and other inverse graphics tasks (without requiring any form of 3D supervision). Differentiable physics simulators are being built to perform physical parameter estimation from video or for model-predictive control. While these advances have largely occurred in isolation, recent efforts have attempted to bridge the gap between the aforementioned areas. Narrowing the gaps between these otherwise isolated disciplines holds tremendous potential to yield new research directions and solve long-standing problems, particularly in understanding and reasoning about the 3D world.
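To illustrate the core idea with a toy example of our own (PyTorch is our choice here, not something the workshop prescribes), the following sketch differentiates through a tiny Euler-integration "physics program" to estimate a launch velocity, in the spirit of the physical parameter estimation mentioned above:

```python
import torch

def simulate(v0, steps=100, dt=0.01, g=9.81):
    # Euler integration of a point mass under gravity; every operation is
    # a torch op, so gradients flow through the entire rollout.
    pos = torch.zeros(2)
    vel = v0
    gravity = torch.tensor([0.0, -g])
    for _ in range(steps):
        vel = vel + dt * gravity
        pos = pos + dt * vel
    return pos

target = torch.tensor([0.8, 0.0])                  # desired landing point
v0 = torch.tensor([1.0, 1.0], requires_grad=True)  # launch velocity to learn
opt = torch.optim.Adam([v0], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = ((simulate(v0) - target) ** 2).sum()
    loss.backward()                                # differentiate through physics
    opt.step()
```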

Hence, we propose the “first workshop on differentiable computer vision, graphics, and physics in machine learning” with the aim of:
1. Narrowing the gap and fostering synergies between the computer vision, graphics, physics, and machine learning communities
2. Debating the promise and perils of differentiable methods, and identifying challenges that need to be overcome
3. Raising awareness about these techniques to the larger ML community
4. Discussing the broader impact of such techniques, and any ethical implications thereof.

Self-Supervised Learning for Speech and Audio Processing

Abdelrahman Mohamed, Hung-yi Lee, Shinji Watanabe, Shang-Wen Li, Tara Sainath, Karen Livescu
2020-12-11T06:50:00-08:00 - 2020-12-11T16:25:00-08:00
There is a trend in the machine learning community to adopt self-supervised approaches to pre-train deep networks. Self-supervised learning utilizes proxy supervised learning tasks, for example, distinguishing parts of the input signal from distractors, or generating masked input segments conditioned on the unmasked ones, to obtain training data from unlabeled corpora. These approaches make it possible to use the tremendous amount of unlabeled data on the web to train large networks and solve complicated tasks. ELMo, BERT, and GPT in NLP are famous examples in this direction. Recently, self-supervised approaches for speech and audio processing have also been gaining attention. These approaches combine methods for utilizing no or partial labels, unpaired text and audio data, contextual text and video supervision, and signals from user interactions. Although research on self-supervised learning is active in speech and audio processing, current work is limited to a few problems such as automatic speech recognition, speaker identification, and speech translation, partially due to the diversity of modeling in various speech and audio processing problems. There is still much unexplored territory in this research direction.
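As a minimal illustration of such a proxy task (a toy sketch under our own assumptions, not a method endorsed by the workshop), the following PyTorch snippet trains a small network to reconstruct masked segments of a synthetic 1-D signal from the unmasked context, so the labels come for free:

```python
import torch
import torch.nn as nn

# Masked-prediction proxy task on a fake "audio" signal of length 128.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):
    # Batch of 32 sinusoids with random phase stands in for real audio.
    x = torch.sin(torch.rand(32, 1) * 6.28 + torch.linspace(0, 12.0, 128))
    mask = (torch.rand(32, 128) < 0.25).float()         # hide 25% of each signal
    pred = model(x * (1 - mask))                        # model sees masked input
    loss = ((pred - x) ** 2 * mask).sum() / mask.sum()  # score only masked parts
    opt.zero_grad()
    loss.backward()
    opt.step()
```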

This workshop will bring concentrated discussions on self-supervision for the field of speech and audio processing via several invited talks, oral and poster sessions with high-quality papers, and a panel of leading researchers from academia and industry. Alongside research work on new self-supervised methods, data, applications, and results, this workshop will call for novel work on understanding, analyzing, and comparing different self-supervision approaches for speech and audio processing. The workshop aims to:
- Review existing and inspire new self-supervised methods and results,
- Motivate the application of self-supervision approaches to more speech and audio processing problems in academia and industry, and encourage discussion amongst experts and practitioners from the two realms,
- Encourage work on methods for understanding learned representations, comparing different self-supervision methods, and comparing self-supervision to the self-training and transfer learning methods that low-resource speech and audio processing has long utilized,
- Facilitate communication within the field of speech and audio processing (e.g., people who attend conferences such as INTERSPEECH and ICASSP) as well as between the field and the whole machine learning community for sharing knowledge, ideas, and data, and encourage future collaboration to inspire innovation in the field and the whole community.

Causal Discovery and Causality-Inspired Machine Learning

Biwei Huang, Sara Magliacane, Kun Zhang, Danielle Belgrave, Elias Bareinboim, Daniel Malinsky, Thomas Richardson, Christopher Meek, Peter Spirtes, Bernhard Schölkopf
2020-12-11T06:50:00-08:00 - 2020-12-11T16:50:00-08:00
Causality is a fundamental notion in science and engineering, and one of the central problems in the field is how to find the causal structure or the underlying causal model. For instance, one focus of this workshop is on *causal discovery*, i.e., how can we discover causal structure over a set of variables from observational data with automated procedures? Another area of interest is *how a causal perspective may help understand and solve advanced machine learning problems*.
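As a toy illustration of constraint-based causal discovery (a sketch of ours, assuming linear-Gaussian data), conditional-independence tests reveal structure: in a chain X -> Y -> Z, X and Z are strongly correlated, but become independent once Y's linear influence is removed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Ground-truth chain: X -> Y -> Z.
X = rng.normal(size=n)
Y = 2.0 * X + rng.normal(size=n)
Z = -1.5 * Y + rng.normal(size=n)

def residual(a, b):
    # Residual of linearly regressing a on b (removes b's linear influence).
    c = np.cov(a, b)
    return a - (c[0, 1] / c[1, 1]) * b

# Marginal dependence: X and Z are strongly correlated...
print(np.corrcoef(X, Z)[0, 1])                            # far from 0
# ...but conditionally independent given Y (partial correlation near 0),
# the statistical footprint of the chain X -> Y -> Z.
print(np.corrcoef(residual(X, Y), residual(Z, Y))[0, 1])  # near 0
```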

Recent years have seen impressive progress in theoretical and algorithmic developments of causal discovery from various types of data (e.g., from i.i.d. data, under distribution shifts or in nonstationary settings, under latent confounding or selection bias, or with missing data), as well as in practical applications (such as in neuroscience, climate, biology, and epidemiology). However, many practical issues, including confounding, the large scale of the data, the presence of measurement error, and complex causal mechanisms, are still to be properly addressed, to achieve reliable causal discovery in practice.

Moreover, causality-inspired machine learning (in the context of transfer learning, reinforcement learning, deep learning, etc.) leverages ideas from causality to improve generalization, robustness, interpretability, and sample efficiency and is attracting more and more interest in Machine Learning (ML) and Artificial Intelligence. Despite the benefit of the causal view in transfer learning and reinforcement learning, some tasks in ML, such as dealing with adversarial attacks and learning disentangled representations, are closely related to the causal view but are currently underexplored, and cross-disciplinary efforts may facilitate the anticipated progress.

This workshop aims to provide a forum for discussion for researchers and practitioners in machine learning, statistics, healthcare, and other disciplines to share their recent research in causal discovery and to explore the possibility of interdisciplinary collaboration. We also particularly encourage real applications, such as in neuroscience, biology, and climate science, of causal discovery methods.

*************
After each keynote, there will be 5 minutes for a live Q&A. You may post your questions in Rocket.Chat before or during the keynote. The poster session and the virtual coffee break will be on Gather.Town. There is no Q&A for orals and spotlight talks, but all papers will be presented at the poster session, where you can interact with the authors. More details will come soon.

Resistance AI Workshop

Suzanne Kite, Mattie Tesfaldet, J Khadijah Abdurahman, William Agnew, Elliot Creager, Agata Foryciarz, Raphael Gontijo Lopes, Pratyusha Kalluri, Marie-Therese Png, Manuel Sabin, Maria Skoularidou, Ramon Vilarino, Rose Wang, Sayash Kapoor
2020-12-11T07:00:00-08:00 - 2020-12-11T17:30:00-08:00
It has become increasingly clear in recent years that AI research, far from producing neutral tools, has been concentrating power in the hands of governments and companies and away from marginalized communities. Unfortunately, NeurIPS has lacked a venue explicitly dedicated to understanding and addressing the root of these problems. As Black feminist scholar Angela Davis famously said, "Radical simply means grasping things at the root." Resistance AI exposes the root problem of AI to be how technology is used to rearrange power in the world. AI researchers engaged in Resistance AI both resist AI that centralizes power into the hands of the few and dream up and build human/AI systems that put power in the hands of the people. This workshop will enable AI researchers in general, researchers engaged in Resistance AI, and marginalized communities in particular to reflect on AI-fueled inequity and co-create tactics for addressing this issue in our own work.

Machine Learning and the Physical Sciences

Anima Anandkumar, Kyle Cranmer, Shirley Ho, Mr. Prabhat, Lenka Zdeborová, Atilim Gunes Baydin, Juan Carrasquilla, Adji Dieng, Karthik Kashinath, Gilles Louppe, Brian Nord, Michela Paganini, Savannah Thais
2020-12-11T07:00:00-08:00 - 2020-12-11T15:15:00-08:00
Machine learning methods have had great success in learning complex representations that enable them to make predictions about unobserved data. Physical sciences span problems and challenges at all scales in the universe: from finding exoplanets in trillions of sky pixels, to finding machine learning inspired solutions to the quantum many-body problem, to detecting anomalies in event streams from the Large Hadron Collider. Tackling a number of associated data-intensive tasks including, but not limited to, segmentation, 3D computer vision, sequence modeling, causal reasoning, and efficient probabilistic inference are critical for furthering scientific discovery. In addition to using machine learning models for scientific discovery, the ability to interpret what a model has learned is receiving an increasing amount of attention.

In this targeted workshop, we would like to bring together computer scientists, mathematicians and physical scientists who are interested in applying machine learning to various outstanding physical problems, in particular in inverse problems and approximating physical processes; understanding what the learned model really represents; and connecting tools and insights from physical sciences to the study of machine learning models. In particular, the workshop invites researchers to contribute papers that demonstrate cutting-edge progress in the application of machine learning techniques to real-world problems in physical sciences, and using physical insights to understand what the learned model means.

By bringing together machine learning researchers and physical scientists who apply machine learning, we expect to strengthen the interdisciplinary dialogue, introduce exciting new open problems to the broader community, and stimulate production of new approaches to solving open problems in sciences. Invited talks from leading individuals in both communities will cover the state-of-the-art techniques and set the stage for this workshop.

ML Competitions at the Grassroots (CiML 2020)

Tara Chklovski, Adrienne Mendrik, Amir Banifatemi, Gustavo Stolovitzky
2020-12-11T07:00:00-08:00 - 2020-12-11T12:30:00-08:00
For the eighth edition of the CiML (Challenges in Machine Learning) workshop at NeurIPS, our goals are to: 1) Increase diversity in the participant community in order to improve the quality of model predictions; 2) Identify and share best practices in building AI capability in vulnerable communities; 3) Celebrate pioneers from these communities who are modeling lifelong learning, curiosity, and courage in learning how to use ML to address critical problems in their communities.

The workshop will provide concrete recommendations to the ML community on designing and implementing competitions that are more accessible to a broader public, and more effective in building long-term AI/ML capability.

The workshop will feature keynote speakers from ML, behavioral science and gender and development, interspersed with small group discussions around best practices in implementing ML competitions. We will invite submissions of 2-page extended abstracts on topics relating to machine learning competitions, with a special focus on methods of creating diverse datasets, strategies for addressing behavioral barriers to participation in ML competitions from underrepresented communities, and strategies for measuring the long-term impact of participation in an ML competition.

3rd Robot Learning Workshop

Masha Itkina, Alex Bewley, Roberto Calandra, Igor Gilitschenski, Julien PEREZ, Ransalu Senanayake, Markus Wulfmeier, Vincent Vanhoucke
2020-12-11T07:30:00-08:00 - 2020-12-11T19:30:00-08:00
In the proposed workshop, we aim to discuss the challenges and opportunities for machine learning research in the context of physical systems. This discussion involves the presentation of recent methods and the experience gained from deployment on real-world platforms. Such deployment requires a significant degree of generalization: the real world is vastly more complex and diverse than fixed curated datasets and simulations. Deployed machine learning models must scale to this complexity, be able to adapt to novel situations, and recover from mistakes. Moreover, the workshop aims to further strengthen the ties between the robotics and machine learning communities by discussing how their respective recent directions result in new challenges, requirements, and opportunities for future research.

Following the success of previous robot learning workshops at NeurIPS, the goal of this workshop is to bring together a diverse set of scientists at various stages of their careers and foster interdisciplinary communication and discussion.
In contrast to the previous robot learning workshops, which focused on applications of machine learning in robotics, this workshop extends the discussion to how real-world applications in the context of robotics can trigger various impactful directions for the development of machine learning. For a more engaging workshop, we encourage each of our senior presenters to share their presentations with a PhD student or postdoctoral researcher from their lab. Additionally, all our presenters, invited and contributed, are asked to add a “dirty laundry” slide describing the limitations and shortcomings of their work. We expect this will aid further discussion in poster and panel sessions, in addition to helping junior researchers avoid similar roadblocks along their path.

Workshop on Deep Learning and Inverse Problems

Reinhard Heckel, Paul Hand, Richard Baraniuk, Lenka Zdeborová, Soheil Feizi
2020-12-11T07:30:00-08:00 - 2020-12-11T16:00:00-08:00
Learning-based methods, and in particular deep neural networks, have emerged as highly successful and universal tools for image and signal recovery and restoration. They achieve state-of-the-art results on tasks ranging from image denoising and compression to image reconstruction from few and noisy measurements. They are starting to be used in important imaging technologies, for example in GE's newest computed tomography scanners and in the newest generation of the iPhone.

The field has a range of theoretical and practical questions that remain unanswered. In particular, learning and neural network-based approaches often lack the guarantees of traditional physics-based methods. Further, while superior on average, learning-based methods can make drastic reconstruction errors, such as hallucinating a tumor in an MRI reconstruction or turning a pixelated picture of Barack Obama into an image of a white man.

This virtual workshop aims to bring together theoreticians and practitioners in order to chart recent advances and discuss new directions in deep neural network-based approaches for solving inverse problems in the imaging sciences and beyond. NeurIPS, with its visibility and attendance by experts in machine learning, offers the ideal setting for this exchange of ideas. We will use the virtual format to make this topic accessible to a broader audience than an in-person meeting could reach.

Machine Learning for Autonomous Driving

Rowan McAllister, Xinshuo Weng, Daniel Omeiza, Nick Rhinehart, Fisher Yu, German Ros, Vladlen Koltun
2020-12-11T07:55:00-08:00 - 2020-12-11T17:00:00-08:00
Welcome to the NeurIPS 2020 Workshop on Machine Learning for Autonomous Driving!

Autonomous vehicles (AVs) offer a rich source of high-impact research problems for the machine learning (ML) community, including perception, state estimation, probabilistic modeling, time series forecasting, gesture recognition, robustness guarantees, real-time constraints, user-machine communication, multi-agent planning, and intelligent infrastructure. Further, the interaction between ML subfields towards a common goal of autonomous driving can catalyze interesting inter-field discussions that spark new avenues of research, which this workshop aims to promote. As an application of ML, autonomous driving has the potential to greatly improve society by reducing road accidents, giving independence to those unable to drive, and even inspiring younger generations with tangible examples of ML-based technology clearly visible on local streets.

All are welcome to submit and/or attend! This will be the 5th NeurIPS workshop in this series. Previous workshops in 2016, 2017, 2018 and 2019 enjoyed wide participation from both academia and industry.

First Workshop on Quantum Tensor Networks in Machine Learning

Xiao-Yang Liu, Qibin Zhao, Jacob Biamonte, Cesar F Caiafa, Paul Pu Liang, Nadav Cohen, Stefan Leichenauer
2020-12-11T08:00:00-08:00 - 2020-12-11T19:00:00-08:00
Quantum tensor networks in machine learning (QTNML) are envisioned to have great potential to advance AI technologies. Quantum machine learning promises quantum advantages (potentially exponential speedups in training, quadratic speedup in convergence, etc.) over classical machine learning, while tensor networks provide powerful simulations of quantum machine learning algorithms on classical computers. As a rapidly growing interdisciplinary area, QTNML may serve as an amplifier for computational intelligence, a transformer for machine learning innovations, and a propeller for AI industrialization.

Tensor networks, contracted networks of factor tensors, have arisen independently in several areas of science and engineering. Such networks appear in the description of physical processes, and an accompanying collection of numerical techniques has elevated the use of quantum tensor networks into a variational model of machine learning. Underlying these algorithms is the compression of the high-dimensional data needed to represent quantum states of matter. These compression techniques have recently proven ripe for application to many traditional problems faced in deep learning. Quantum tensor networks have shown significant power in compactly representing deep neural networks and in enabling their efficient training and theoretical understanding. More potential QTNML technologies are rapidly emerging, such as approximating probability functions and probabilistic graphical models. However, the topic of QTNML is relatively young and many open problems remain to be explored.
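To give a concrete feel for the central object here (an illustrative sketch of ours, not workshop code), the following NumPy snippet stores a 2^10-dimensional vector as a matrix product state, one of the simplest tensor networks, and reads off individual entries by contracting a chain of small factors:

```python
import numpy as np

# A matrix product state (tensor train) encodes a vector with 2**n entries
# using n small factor tensors; contraction recovers any entry cheaply.
n, bond = 10, 4
cores = [np.random.rand(1 if i == 0 else bond, 2, 1 if i == n - 1 else bond)
         for i in range(n)]

def mps_entry(cores, bits):
    # Contract the network to read one entry of the implicit 2**n vector.
    m = cores[0][:, bits[0], :]
    for core, b in zip(cores[1:], bits[1:]):
        m = m @ core[:, b, :]   # chain of small matrix products
    return m[0, 0]

print(mps_entry(cores, [0, 1] * 5))  # one of 1024 entries, never materialized
```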

Quantum algorithms are typically described by quantum circuits (quantum computational networks). These networks are indeed a class of tensor networks, creating an evident interplay between classical tensor network contraction algorithms and executing tensor contractions on quantum processors. The modern field of quantum enhanced machine learning has started to utilize several tools from tensor network theory to create new quantum models of machine learning and to better understand existing ones.

The interplay between tensor networks, machine learning and quantum algorithms is rich. Indeed, this interplay is based not just on numerical methods but on the equivalence of tensor networks to various quantum circuits, rapidly developing algorithms from the mathematics and physics communities for optimizing and transforming tensor networks, and connections to low-rank methods for learning. A merger of tensor network algorithms with state-of-the-art approaches in deep learning is now taking place. A new community is forming, which this workshop aims to foster.

Fair AI in Finance

Senthil Kumar, Cynthia Rudin, John Paisley, Isabelle Moulinier, C. Bayan Bruss, Eren K., Susan Tibbs, Oluwatobi Olabiyi, Simona Gandrabur, Svitlana Vyetrenko, Kevin Compher
2020-12-11T08:00:00-08:00 - 2020-12-11T17:27:00-08:00
The financial services industry has unique needs for fairness when adopting artificial intelligence and machine learning (AI/ML). First and foremost, there are strong ethical reasons to ensure that models used for activities such as credit decisioning and lending are fair and unbiased, or that machine reliance does not cause humans to miss critical pieces of data. Then there are the regulatory requirements to actually prove that the models are unbiased and that they do not discriminate against certain groups.

Emerging techniques such as algorithmic credit scoring introduce new challenges. Traditionally financial institutions have relied on a consumer’s past credit performance and transaction data to make lending decisions. But, with the emergence of algorithmic credit scoring, lenders also use alternate data such as those gleaned from social media and this immediately raises questions around systemic biases inherent in models used to understand customer behavior.

We also need to pay careful attention to ways in which AI can not only be de-biased, but also play an active role in making financial services more accessible to those historically shut out due to prejudice and other social injustices.

The aim of this workshop is to bring together researchers from different disciplines to discuss fair AI in financial services. For the first time, four major banks have come together to organize this workshop along with researchers from two universities as well as SEC and FINRA (Financial Industry Regulatory Authority). Our confirmed invited speakers come with different backgrounds including AI, law and cultural anthropology, and we hope that this will offer an engaging forum with diversity of thought to discuss the fairness aspects of AI in financial services. We are also planning a panel discussion on systemic bias and its impact on financial outcomes of different customer segments, and how AI can help.

Crowd Science Workshop: Remoteness, Fairness, and Mechanisms as Challenges of Data Supply by Humans for Automation

Daria Baidakova, Fabio Casati, Alexey Drutsa, Dmitry Ustalov
2020-12-11T08:00:00-08:00 - 2020-12-11T16:00:00-08:00
Despite the obvious advantages, automation driven by machine learning and artificial intelligence carries pitfalls for the lives of millions of people: the disappearance of many well-established mass professions, and the consumption of labeled data produced by humans who are managed through an outdated approach of full-time office work and pre-planned task types. Crowdsourcing methodology can be considered an effective way to overcome these issues, since it provides task executors freedom in terms of place, time, and which task type they want to work on. However, many potential participants in crowdsourcing processes hesitate to use this technology due to a series of doubts that have persisted over the past decade.

This workshop brings together people studying research questions on

(a) quality and effectiveness in remote crowd work;
(b) fairness and quality of life at work, tackling issues such as fair task assignment, fair work conditions, and on providing opportunities for growth; and
(c) economic mechanisms that incentivize quality and effectiveness for requesters while maintaining a high level of quality and fairness for crowd performers (also known as workers).

Because quality, fairness and opportunities for crowd workers are central to our workshop, we will invite a diverse group of crowd workers from a global public crowdsourcing platform to our panel-led discussion.

Workshop web site: https://research.yandex.com/workshops/crowd/neurips-2020

Paper submission portal: https://easychair.org/conferences/?conf=neurips2020crowd

All submissions must be in PDF format. The page limit is up to eight (8) pages maximum for regular papers and four (4) pages for work-in-progress/vision papers. These limits are for main content pages, including all figures and tables. Additional pages containing appendices, acknowledgements, funding disclosures, and references are allowed. You must format your submission using the NeurIPS 2020 LaTeX style file which includes a “preprint” option for non-anonymous preprints posted online. The maximum file size for submissions is 50MB. Submissions that violate the NeurIPS style (e.g., by decreasing margins or font sizes) or page limits may be rejected without further review.

As an author, you are responsible for anonymizing your submission. In particular, you should not include author names, author affiliations, or acknowledgements in your submission and you should avoid providing any other identifying information.

Competition Track Friday

Hugo Jair Escalante, Katja Hofmann
2020-12-11T08:00:00-08:00 - 2020-12-11T17:45:00-08:00
First session of the competition program at NeurIPS 2020.

Machine learning competitions have grown in popularity and impact over the last decade, emerging as an effective means to advance the state of the art by posing well-structured, relevant, and challenging problems to the community at large. Motivated by a reward or merely the satisfaction of seeing their machine learning algorithm reach the top of a leaderboard, practitioners innovate, improve, and tune their approach before evaluating on a held-out dataset or environment. The competition track of NeurIPS has matured in 2020, its fourth year, with a considerable increase in both the number of challenges and the diversity of domains and topics. A total of 16 competitions are featured this year as part of the track, with 8 competitions associated with each of the two days. The list of competitions that are part of the program is available here:

https://neurips.cc/Conferences/2020/CompetitionTrack

Object Representations for Learning and Reasoning

William Agnew, Rim Assouel, Michael Chang, Antonia Creswell, Eliza Kosoy, Aravind Rajeswaran, Sjoerd van Steenkiste
2020-12-11T08:00:00-08:00 - 2020-12-11T19:15:00-08:00
Recent advances in deep reinforcement learning and robotics have enabled agents to achieve superhuman performance on a variety of challenging games and learn complex manipulation tasks. While these results are very promising, several open problems remain. In order to function in real-world environments, learned policies must be both robust to input perturbations and be able to rapidly generalize or adapt to novel situations. Moreover, to collaborate and live with humans in these environments, the goals and actions of embodied agents must be interpretable and compatible with human representations of knowledge. Hence, it is natural to consider how humans so successfully perceive, learn, and plan to build agents that are equally successful at solving real world tasks.
There is much evidence to suggest that objects are a core level of abstraction at which humans perceive and understand the world [8]. Objects have the potential to provide a compact, causal, robust, and generalizable representation of the world. Recently, there have been many advancements in scene representation, allowing scenes to be represented by their constituent objects rather than at the level of pixels. While these works have shown promising results, there is still a lack of agreement on how best to represent objects, how to learn object representations, and how best to leverage them in agent training.
In this workshop we seek to build a consensus on what object representations should be by engaging with researchers from developmental psychology and by defining concrete tasks and capabilities that agents building on top of such abstract representations of the world should succeed at. We will discuss how object representations may be learned through invited presenters with expertise both in unsupervised and supervised object representation learning methods. Finally, we will host conversations and research on new frontiers in object learning.

Deep Reinforcement Learning

Pieter Abbeel, Chelsea Finn, Joelle Pineau, David Silver, Satinder Singh, Coline Devin, Misha Laskin, Kimin Lee, Janarthanan Rajendran, Vivek Veeriah
2020-12-11T08:30:00-08:00 - 2020-12-11T19:00:00-08:00
In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multiagent interactions. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside of the field gain a high-level view about the current state of the art and potential directions for future contributions.

ML Retrospectives, Surveys & Meta-Analyses (ML-RSA)

Chhavi Yadav, Prabhu Pradhan, Abhishek Gupta, Jesse Dodge, Mayoore Jaiswal, Peter Henderson, Ryan Lowe, Jessica Forde
2020-12-11T08:30:00-08:00 - 2020-12-11T21:00:00-08:00
The exponential growth of AI research has led to an overwhelming number of papers on arXiv, making it difficult to survey the existing literature. Despite the huge demand, the proportion of survey and analysis papers published is very low, for reasons such as the lack of a venue and of incentives. Our workshop, ML-RSA, provides a platform for, and incentivizes the writing of, such papers. It meets the need to take a step back, look at a sub-field as a whole, and evaluate actual progress. We will accept three types of papers: broad survey papers, meta-analyses, and retrospectives. Survey papers will mention and cluster different types of approaches, provide pros and cons, highlight good source-code implementations and applications, and emphasize impactful literature. We expect this type of paper to provide a detailed investigation of the techniques and to link together themes across multiple works. The main aim of these will be to organize techniques and lower the barrier to entry for newcomers. Meta-analyses, on the other hand, are forward-looking, aimed at providing critical insights on the current state of affairs of a sub-field and proposing new directions based on them. These are expected to be more than just an ablation study, though an empirical analysis is encouraged, as it can provide for a stronger narrative. Ideally, they will seek to showcase trends that cannot be seen by looking at individual papers. Finally, retrospectives seek to provide further insights ex post from the authors of a paper: these could be technical, insights into the research process, or other helpful information that isn't apparent from the original work.

BabyMind: How Babies Learn and How Machines Can Imitate

Byoung-Tak Zhang, Gary Marcus, Angelo Cangelosi, Pia Knoeferle, Klaus Obermayer, David Vernon, Chen Yu
2020-12-11T08:40:00-08:00 - 2020-12-11T17:30:00-08:00
Deep neural network models have shown remarkable performance in tasks such as visual object recognition, speech recognition, and autonomous robot control. We have seen continuous improvements throughout the years, which have led to these models surpassing human performance in a variety of tasks such as image classification, video games, and board games. However, the performance of deep learning models heavily relies on massive amounts of data, which require huge time and effort to collect and label.

Recently, to overcome these weaknesses and limitations, attention has shifted towards machine learning paradigms such as semi-supervised learning, incremental learning, and meta-learning, which aim to be more data-efficient. However, these learning models still require huge amounts of data to achieve high performance on real-world problems. There have been only a few breakthroughs, especially in terms of the ability to grasp abstract concepts and to generalize across problems.

In contrast, human babies gradually make sense of the environment through their experiences, a process known as learning by doing, without a large amount of labeled data. They actively engage with their surroundings and explore the world through their own interactions. They gradually acquire the abstract concept of objects and develop the ability to generalize problems. Thus, if we understand how a baby's mind develops, we can imitate those learning processes in machines and thereby solve previously unsolved problems such as domain generalization and overcoming the stability-plasticity dilemma. In this workshop, we explore how these learning mechanisms can help us build human-level intelligence in machines.

In this interdisciplinary workshop, we bring together eminent researchers in Computer Science, Cognitive Science, Psychology, Brain Science, Developmental Robotics and various other related fields to discuss the below questions on babies vs. machines.

■ How far is the state-of-the-art machine intelligence from babies?
■ How does a baby learn from their own interactions and experiences?
■ What sort of insights can we acquire from the baby's mind?
■ How can those insights help us build smart machines with baby-like intelligence?
■ How can machines learn from babies to do better?
■ How can these machines further contribute to solving real-world problems?

We will invite selected experts in the related fields to give insightful talks. We will also encourage interdisciplinary contributions from researchers in the above topics. Hence, we expect this workshop to be a good starting point for participants in various fields to discuss theoretical fundamentals, open problems, and major directions of further development in an exciting new area.

KR2ML - Knowledge Representation and Reasoning Meets Machine Learning

Veronika Thost, Kartik Talamadupula, Vivek Srikumar, Chenwei Zhang, Josh Tenenbaum
2020-12-11T08:40:00-08:00 - 2020-12-11T17:25:00-08:00
Machine learning (ML) has seen a tremendous amount of recent success and has been applied successfully in a variety of domains. However, it comes with several drawbacks, such as the need for large amounts of training data and the lack of explainability and verifiability of the results. In many domains, there is structured knowledge (e.g., from electronic health records, laws, clinical guidelines, or common sense knowledge) which can be leveraged for reasoning in an informed way (i.e., including the information encoded in the knowledge representation itself) in order to obtain high-quality answers. Symbolic approaches for knowledge representation and reasoning (KRR) are less prominent today - mainly due to their lack of scalability - but their strength lies in the verifiable and interpretable reasoning that can be accomplished. The KR2ML workshop aims at the intersection of these two subfields of AI. It will shine a light on the synergies that (could/should) exist between KRR and ML, and will initiate a discussion about the key challenges in the field.

Machine Learning for Economic Policy

Stephan Zheng, Alex Trott, Annie Liang, Jamie Morgenstern, David Parkes, Nika Haghtalab
2020-12-11T09:00:00-08:00 - 2020-12-11T16:00:00-08:00
www.mlforeconomicpolicy.com
mlforeconomicpolicy.neurips2020@gmail.com

The goal of this workshop is to inspire and engage a broad interdisciplinary audience, including computer scientists, economists, and social scientists, around topics at the exciting intersection of economics, public policy, and machine learning. We feel that machine learning offers enormous potential to transform our understanding of economics, economic decision making, and public policy, and yet its adoption by economists and social scientists remains nascent.

We want to use the workshop to expose some of the critical socio-economic issues that stand to benefit from applying machine learning, expose underexplored economic datasets and simulations, and identify machine learning research directions that would have significant positive socio-economic impact. In effect, we aim to accelerate the use of machine learning to rapidly develop, test, and deploy fair and equitable economic policies that are grounded in representative data.

For example, we would like to explore whether machine learning can help develop effective economic policy, how to understand economic behavior through granular economic datasets, how to automate economic transactions for individuals, and how to build rich and faithful simulations of economic systems with strategic agents. We would like to develop economic policies and mechanisms that target socio-economic issues including diversity and fair representation in economic outcomes, economic equality, and improving economic opportunity. In particular, we want to highlight both the opportunities for and the barriers to adoption of ML in economics.

Algorithmic Fairness through the Lens of Causality and Interpretability

Awa Dieng, Jessica Schrouff, Matt J Kusner, Golnoosh Farnadi, Fernando Diaz
2020-12-12T01:00:00-08:00 - 2020-12-12T12:10:00-08:00
Black-box machine learning models have gained widespread deployment in decision-making settings across many parts of society, from sentencing decisions to medical diagnostics to loan lending. However, many models have been found to be biased against certain demographic groups. Initial work on algorithmic fairness focused on formalizing statistical measures of fairness that could be used to train new classifiers. While these models were an important first step towards addressing fairness concerns, they came with immediate challenges. Causality has recently emerged as a powerful tool to address these shortcomings. Causality can be seen as a model-first approach: starting with the language of structural causal models or potential outcomes, the idea is to frame, and then solve, questions of algorithmic fairness in this language. Such causal definitions of fairness can have far-reaching impact, especially in high-risk domains. Interpretability, on the other hand, can be viewed as a user-first approach: can the ways in which algorithms work be made more transparent, making it easier for them to align with our societal values on fairness? In this way, interpretability can sometimes be more actionable than causality work.
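
As background for readers (not part of the workshop program itself), one widely cited causal formalization of fairness is counterfactual fairness (Kusner et al., 2017): a predictor $\hat{Y}$ is counterfactually fair with respect to a protected attribute $A$ if, in the underlying structural causal model with latent background variables $U$,

```latex
P\big(\hat{Y}_{A \leftarrow a}(U) = y \,\big|\, X = x, A = a\big)
  \;=\;
P\big(\hat{Y}_{A \leftarrow a'}(U) = y \,\big|\, X = x, A = a\big)
\quad \text{for every } y \text{ and every attainable value } a'.
```

Informally: changing the protected attribute in the counterfactual world, while holding the background variables fixed, should not change the distribution of the prediction.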

Given these initial successes, this workshop aims to more deeply investigate how open questions in algorithmic fairness can be addressed with Causality and Interpretability. Questions such as: What improvements can causal definitions provide compared to existing statistical definitions of fairness? How can causally grounded methods help develop more robust fairness algorithms in practice? What tools for interpretability are useful for detecting bias and building fair systems? What are good formalizations of interpretability when addressing fairness questions?

Website: www.afciworkshop.org

Medical Imaging Meets NeurIPS

Jonas Teuwen, Marleen de Bruijne, Qi Dou, Ben Glocker, Ipek Oguz, Aasa Feragen, Hervé Lombaert, Ender Konukoglu
2020-12-12T02:30:00-08:00 - 2020-12-12T11:25:00-08:00
'Medical Imaging meets NeurIPS' is a satellite workshop established in 2017. The workshop aims to bring researchers together from the medical image computing and machine learning communities. The objective is to discuss the major challenges in the field and opportunities for joining forces. This year the workshop will feature online oral and poster sessions with an emphasis on audience interactions. In addition, there will be a series of high-profile invited speakers from industry, academia, engineering and medical sciences giving an overview of recent advances, challenges, latest technology and efforts for sharing clinical data.

Medical imaging is facing a major crisis, with an ever-increasing complexity and volume of data and immense economic pressure. The interpretation of medical images pushes human abilities to the limit, with the risk that critical patterns of disease go undetected. Machine learning has emerged as a key technology for developing novel tools in computer-aided diagnosis, therapy, and intervention. Still, progress is slow compared to other fields of visual recognition, mainly due to the domain complexity and the constraints of clinical applications, which require robust, accurate, and reliable solutions. The workshop aims to raise awareness of the unmet needs in machine learning for successful applications in medical imaging.

Machine Learning for the Developing World (ML4D): Improving Resilience

Tejumade Afonja, Konstantin Klemmer, Niveditha Kalavakonda, Femi (Oluwafemi) Azeez, Aya Salama, Paula Rodriguez Diaz
2020-12-12T04:00:00-08:00 - 2020-12-12T14:00:00-08:00
A few months ago, the world was shaken by the outbreak of the novel Coronavirus, exposing the lack of preparedness for such a case in many nations around the globe. As we watched the daily number of cases of the virus rise exponentially, and governments scramble to design appropriate policies, communities collectively asked “Could we have been better prepared for this?” Similar questions have been brought up by the climate emergency the world is now facing.
At a time of global reckoning, this year’s ML4D program will focus on building and improving resilience in developing regions through machine learning. Past iterations of the workshop have explored how machine learning can be used to tackle global development challenges, the potential benefits of such technologies, as well as the associated risks and shortcomings. This year we seek to ask our community to go beyond solely tackling existing problems by building machine learning tools with foresight, anticipating application challenges, and providing sustainable, resilient systems for long-term use.
This one-day workshop will bring together a diverse set of participants from across the globe. Attendees will learn how machine learning tools can help enhance preparedness for disease outbreaks, address the climate crisis, and improve countries’ ability to respond to emergencies. The workshop will also discuss how naive “tech solutionism” can threaten resilience by posing risks to human rights, enabling mass surveillance, and perpetuating inequalities. The workshop will include invited talks, contributed talks, a poster session of accepted papers, breakout sessions tailored to the workshop’s theme, and panel discussions.

Biological and Artificial Reinforcement Learning

Raymond Chua, Feryal Behbahani, Julie J Lee, Sara Zannone, Rui Ponte Costa, Blake Richards , Ida Momennejad, Doina Precup
2020-12-12T04:30:00-08:00 - 2020-12-12T15:45:00-08:00
Reinforcement learning (RL) algorithms learn through rewards and a process of trial and error. This approach is strongly inspired by the study of animal behaviour and has led to outstanding achievements. However, artificial agents still struggle with a number of difficulties, such as learning in changing environments and over longer timescales, state abstraction, and generalizing and transferring knowledge. Biological agents, on the other hand, excel at these tasks. The first edition of our workshop last year brought together leading and emerging researchers from Neuroscience, Psychology and Machine Learning to share how neural and cognitive mechanisms can provide insights for RL research and how machine learning advances can further our understanding of brain and behaviour. This year, we want to build on the success of our previous workshop by expanding on the challenges that emerged and extending to novel perspectives. The problem of state and action representation and abstraction emerged quite strongly last year, so this year’s program aims to add new perspectives such as hierarchical reinforcement learning, structure learning, and their biological underpinnings. Additionally, we will address learning over long timescales, such as lifelong learning or continual learning, by including views from synaptic plasticity and developmental neuroscience. We hope to inspire and further develop connections between biological and artificial reinforcement learning by bringing together experts from all sides, and to encourage discussions that could help foster novel solutions for both communities.

I Can’t Believe It’s Not Better! Bridging the gap between theory and empiricism in probabilistic machine learning

Jessica Forde, Francisco Ruiz, Melanie Fernandez Pradier, Aaron Schein, Finale Doshi-Velez, Isabel Valera, David Blei, Hanna Wallach
2020-12-12T04:45:00-08:00 - 2020-12-12T14:45:00-08:00
We’ve all been there. A creative spark leads to a beautiful idea. We love the idea, we nurture it, and name it. The idea is elegant: all who hear it fawn over it. The idea is justified: all of the literature we have read supports it. But, lo and behold: once we sit down to implement the idea, it doesn’t work. We check our code for software bugs. We rederive our derivations. We try again and still, it doesn’t work. We Can’t Believe It’s Not Better [1].

In this workshop, we will encourage probabilistic machine learning researchers who Can’t Believe It’s Not Better to share their beautiful idea, tell us why it should work, and hypothesize why it does not in practice. We also welcome work that highlights pathologies or unexpected behaviors in well-established practices. This workshop will stress the quality and thoroughness of the scientific procedure, promoting transparency, deeper understanding, and more principled science.

Focusing on the probabilistic machine learning community will facilitate this endeavor, not only by gathering experts that speak the same language, but also by exploiting the modularity of the probabilistic framework. Probabilistic machine learning separates modeling assumptions, inference, and model checking into distinct phases [2]; this facilitates criticism when the final outcome does not meet prior expectations. We aim to create an open-minded and diverse space for researchers to share unexpected or negative results and help one another improve their ideas.

Machine Learning for Engineering Modeling, Simulation and Design

Alex Beatson, Priya Donti, Amira Abdel-Rahman, Stephan Hoyer, Rose Yu, J. Zico Kolter, Ryan Adams
2020-12-12T04:50:00-08:00 - 2020-12-12T15:00:00-08:00
For full details see: https://ml4eng.github.io/

Modern engineering workflows are built on computational tools for specifying models and designs, for numerical analysis of system behavior, and for optimization, model-fitting and rational design. How can machine learning be used to empower the engineer and accelerate this workflow? We wish to bring together machine learning researchers and engineering academics to address the problem of developing ML tools which benefit engineering modeling, simulation and design, through reduction of required computational or human effort, through permitting new rich design spaces, through enabling production of superior designs, or through enabling new modes of interaction and new workflows.

Machine Learning for Creativity and Design 4.0

Luba Elliott, Sander Dieleman, Adam Roberts, Tom White, Daphne Ippolito, Holly Grimm, Mattie Tesfaldet, Samaneh Azadi
2020-12-12T05:15:00-08:00 - 2020-12-12T15:00:00-08:00
Generative machine learning and machine creativity have continued to grow and attract a wider audience to machine learning. Generative models enable new types of media creation across images, music, and text - including recent advances such as StyleGAN2, Jukebox and GPT-3. This one-day workshop broadly explores issues in the applications of machine learning to creativity and design. We will look at algorithms for generation and creation of new media, engaging researchers building the next generation of generative models (GANs, RL, etc). We investigate the social and cultural impact of these new models, engaging researchers from HCI/UX communities and those using machine learning to develop new creative tools. In addition to covering the technical advances, we also address the ethical concerns ranging from the use of biased datasets to replicating artistic work. Finally, we’ll hear from some of the artists and musicians who are adopting machine learning including deep learning and reinforcement learning as part of their own artistic process. We aim to balance the technical issues and challenges of applying the latest generative models to creativity and design with philosophical and cultural issues that surround this area of research.

Cooperative AI

Thore Graepel, Dario Amodei, Vincent Conitzer, Allan Dafoe, Gillian Hadfield, Eric Horvitz, Sarit Kraus, Kate Larson, Yoram Bachrach
2020-12-12T05:20:00-08:00 - 2020-12-12T12:55:00-08:00
https://www.CooperativeAI.com/

Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at all scales ranging from our daily routines—such as highway driving, communication via shared language, division of labor, and work collaborations—to our global challenges—such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. Since machines powered by artificial intelligence and machine learning are playing an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.

We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation, and innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation.

Research could be organized around key capabilities necessary for cooperation, including: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately learn human preferences, and how best to integrate human norms and ethics into cooperative arrangements.

We are planning to bring together scholars from diverse backgrounds to discuss how AI research can contribute to the field of cooperation.


Call for Papers
We invite high-quality paper submissions on the following topics (broadly construed, this is not an exhaustive list):

-Multi-agent learning
-Agent cooperation
-Agent communication
-Resolving commitment problems
-Agent societies, organizations and institutions
-Trust and reputation
-Theory of mind and peer modelling
-Markets, mechanism design and economics-based cooperation
-Negotiation and bargaining agents
-Team formation problems

Accepted papers will be presented during joint virtual poster sessions and will be made publicly available as non-archival reports, allowing future submission to archival conferences or journals.

Submissions should be up to eight pages excluding references, acknowledgements, and supplementary material, and should follow NeurIPS format. The review process will be double-blind.

Paper submissions: https://easychair.org/my/conference?conf=coopai2020#

International Workshop on Scalability, Privacy, and Security in Federated Learning (SpicyFL 2020)

Xiaolin Andy Li, Dejing Dou, Ameet Talwalkar, Hongyu Li, Jianzong Wang, Yanzhi Wang
2020-12-12T05:20:00-08:00 - 2020-12-12T16:10:00-08:00
In the past decade, we have witnessed rapid progress in machine learning in general, and deep learning in particular, mostly driven by tremendous data. As these intelligent algorithms, systems, and applications are deployed in real-world scenarios, we are now facing new challenges, such as scalability, security, privacy, trust, cost, regulation, and environmental and societal impacts. In the meantime, data privacy and ownership have become more and more critical in many domains, such as finance, health, government, and social networks. Federated learning (FL) has emerged to address data privacy issues. Making FL practically scalable, useful, and efficient, with effective security and privacy mechanisms and policies, calls for joint efforts from the community, academia, and industry. More challenges, interplays, and tradeoffs in scalability, privacy, and security need to be investigated in a holistic and comprehensive manner by the community. We expect broader, deeper, and greater evolution of these concepts and technologies, and their confluence towards holistic trustworthy AI ecosystems.
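
For readers new to FL, the canonical aggregation step is federated averaging (FedAvg, McMahan et al., 2017); the workshop text does not prescribe any particular algorithm, so the following is only a minimal illustrative sketch, with all names hypothetical:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: the server averages the model
    weights returned by each client, weighted by local dataset size.
    Raw training data never leaves the clients."""
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy usage: three clients return locally updated weight vectors.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_weights = fedavg(clients, sizes)
```

Privacy-preserving variants discussed in this space layer secure aggregation or differential privacy on top of exactly this averaging step.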

This workshop provides an open forum for researchers, practitioners, and system builders to exchange ideas, discuss, and shape roadmaps towards scalable and privacy-preserving federated learning in particular, and scalable and trustworthy AI ecosystems in general.

Navigating the Broader Impacts of AI Research

Carolyn Ashurst, Rosie Campbell, Deborah Raji, Solon Barocas, Stuart Russell
2020-12-12T05:30:00-08:00 - 2020-12-12T15:00:00-08:00
Following growing concerns with both harmful research impact and research conduct in computer science, including concerns with research published at NeurIPS, this year’s conference introduced two new mechanisms for ethical oversight: a requirement that authors include a “broader impact statement” in their paper submissions and additional evaluation criteria asking paper reviewers to identify any potential ethical issues with the submissions.

These efforts reflect a recognition that existing research norms have failed to address the impacts of AI research, and take place against the backdrop of a larger reckoning with the role of AI in perpetuating injustice. The changes have been met with both praise and criticism: some within and outside the community see them as a crucial first step towards integrating ethical reflection and review into the research process, fostering necessary changes to protect populations at risk of harm. Others worry that AI researchers are not well placed to recognize and reason about the potential impacts of their work, as effective ethical deliberation may require different expertise and the involvement of other stakeholders.

This debate reveals that, even as the AI research community is beginning to grapple with the legitimacy of certain research questions and to reflect critically on its research practices, there remain many open questions about how to ensure effective ethical oversight. This workshop therefore aims to examine how concerns with harmful impacts should affect the way the research community develops its research agendas, conducts its research, evaluates its research contributions, and handles the publication and dissemination of its findings. This event complements other NeurIPS workshops this year devoted to normative issues in AI and builds on others from years past, but adopts a distinct focus on the ethics of research practice and the ethical obligations of researchers.

Machine Learning for Molecules

Jose Miguel Hernández-Lobato, Matt Kusner, Brooks Paige, Marwin Segler, Jennifer Wei
2020-12-12T05:30:00-08:00 - 2020-12-12T13:00:00-08:00
Discovering new molecules and materials is a central pillar of human well-being, providing new medicines, securing the world’s food supply via agrochemicals, and delivering new battery or solar-panel materials to mitigate climate change. However, the discovery of new molecules for an application can often take up to a decade, with costs spiraling. Machine learning can help to accelerate the discovery process. The goal of this workshop is to bring together researchers interested in improving applications of machine learning for chemical and physical problems and industry experts with practical experience in pharmaceutical and agricultural development. In a highly interactive format, we will outline the current frontiers and present emerging research directions. We aim to use this workshop as an opportunity to establish a common language between all communities, to actively discuss new research problems, and also to collect datasets by which novel machine learning models can be benchmarked. The program is a collection of invited talks, alongside contributed posters. A panel discussion will provide different perspectives and experiences of influential researchers from both fields and also engage open participant conversation. An expected outcome of this workshop is the interdisciplinary exchange of ideas and initiation of collaboration.

Beyond BackPropagation: Novel Ideas for Training Neural Architectures

Mateusz Malinowski, Grzegorz Swirszcz, Viorica Patraucean, Marco Gori, Yanping Huang, Sindy Löwe, Anna Choromanska
2020-12-12T06:00:00-08:00 - 2020-12-12T16:30:00-08:00
Is backpropagation the ultimate tool on the path to achieving synthetic intelligence as its success and widespread adoption would suggest?

Many have questioned the biological plausibility of backpropagation as a learning mechanism ever since its discovery; the weight-transport and timing problems are the most frequently disputed. The same properties of backpropagation training also have practical consequences: for instance, backpropagation is a global and coupled procedure that limits the amount of possible parallelism and yields high latency.
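
To make the weight-transport problem concrete (background only, not the workshop's own material): backpropagation propagates errors through the transpose of the forward weights, which a biological circuit has no obvious way to access. Feedback alignment (Lillicrap et al., 2016) is one proposed alternative that replaces that transpose with a fixed random feedback matrix. A toy numpy sketch under these assumptions, with all names illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two-layer toy network: h = tanh(W1 x), y = W2 h.
W1 = rng.normal(scale=0.1, size=(64, 32))
W2 = rng.normal(scale=0.1, size=(10, 64))
B = rng.normal(scale=0.1, size=(64, 10))  # fixed random feedback (replaces W2.T)

def fa_step(x, target, lr=1e-2):
    global W1, W2
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - target                      # output error
    # Backprop would propagate W2.T @ e; feedback alignment instead uses
    # the fixed matrix B, sidestepping weight transport entirely.
    dh = (B @ e) * (1.0 - h ** 2)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)

fa_step(rng.normal(size=32), rng.normal(size=10))
```

Empirically the forward weights tend to align with the fixed feedback during training, which is what makes such local, transport-free updates workable at all.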

These limitations have motivated us to discuss possible alternative directions. In this workshop, we want to promote such discussions by bringing together researchers from various but related disciplines, and to discuss possible solutions from engineering, machine learning and neuroscientific perspectives.

Wordplay: When Language Meets Games

Prithviraj Ammanabrolu, Matthew Hausknecht, Eric Yuan, Marc-Alexandre Côté, Adam Trischler, Kory Mathewson, John Urbanek, Jason Weston, Mark Riedl
2020-12-12T06:00:00-08:00 - 2020-12-12T15:00:00-08:00
This workshop will focus on exploring the utility of interactive narratives to fill a role as the learning environments of choice for language-based tasks, including but not limited to storytelling. A previous iteration of this workshop, also at NeurIPS, took place very successfully in 2018 with over a hundred attendees, and since then the community of people working in this area has grown rapidly. This workshop aims to be a centralized place where all researchers involved, across a breadth of fields, can interact and learn from each other. Furthermore, it will act as a showcase to the wider NLP/RL/Game communities of interactive narrative's place as a learning environment. The program will feature a collection of invited talks in addition to contributed talks and posters from each of these sections of the interactive narrative community and the wider NLP and RL communities.

MLPH: Machine Learning in Public Health

Rumi Chunara, Abraham Flaxman, Daniel Lizotte, Chirag J Patel, Laura Rosella
2020-12-12T06:00:00-08:00 - 2020-12-12T14:00:00-08:00
Public health and population health refer to the study of daily-life factors and prevention efforts, and their effects on the health of populations. We expect work featured in this workshop to differ from machine learning in healthcare in that it focuses on data and algorithms related to the non-medical conditions that shape our health, including structural, lifestyle, policy, social, behavioral, and environmental factors. Indeed, much of the data traditionally used in machine learning and health problems concerns our interactions with the health care system, and this workshop aims to balance that with machine learning work using data on the non-medical conditions that shape our health. There are many machine learning opportunities specific to these data and to how they are used to assess and understand health and disease, which differ from healthcare-specific data and tasks (e.g., the data are often unstructured and must be captured across the life course and in different environments). This is pertinent both for infectious diseases such as COVID-19 and for non-communicable diseases such as diabetes and stroke. Indeed, the workshop topic is especially timely given the COVID-19 outbreak, protests regarding racism, and the associated interest in exploring the relevance of machine learning to questions around disease incidence, prevention, and mitigation related to both of these and their synergy. These questions require the use of data from outside of healthcare, as well as consideration of how machine learning can augment work in epidemiology and biostatistics.

Interpretable Inductive Biases and Physically Structured Learning

Michael Lutter, Alexander Terenin, Shirley Ho, Lei Wang
2020-12-12T06:30:00-08:00 - 2020-12-12T14:30:00-08:00
Over the last decade, deep networks have propelled machine learning to accomplish tasks previously considered far out of reach, such as human-level performance in image classification and game-playing. However, research has also shown that deep networks are often brittle to distributional shifts in the data: human-imperceptible changes can lead to absurd predictions. In many application areas, including physics, robotics, the social sciences, and the life sciences, this motivates the need for robustness and interpretability, so that deep networks can be trusted in practical applications. Interpretable and robust models can be constructed by incorporating prior knowledge within the model or the learning process as an inductive bias, thereby regularizing the model, avoiding overfitting, and making the model easier to understand for scientists who are not machine-learning experts. In the last few years, researchers from different fields have already proposed various combinations of domain knowledge and machine learning and successfully applied these techniques to various applications.

AI for Earth Sciences

Karthik Mukkavilli, Johanna Hansen, Natasha Dudek, Tom Beucler, Kelly Kochanski, Mayur Mudigonda, Karthik Kashinath, Amy McGovern, Paul D Miller, Chad Frischmann, Pierre Gentine, Gregory Dudek, Aaron Courville, Daniel Kammen, Vipin Kumar
2020-12-12T06:45:00-08:00 - 2020-12-12T21:00:00-08:00
Our workshop, AI for Earth Sciences, seeks to bring cutting-edge geoscientific and planetary challenges to the fore for the machine learning and deep learning communities. We seek machine learning interest from the major areas encompassed by the Earth sciences, including atmospheric physics, hydrologic sciences, cryosphere science, oceanography, geology, planetary sciences, space weather, volcanism, seismology, geo-health (i.e., water, land, and air pollution, environmental epidemics), the biosphere, and biogeosciences. We also seek interest in AI applied to energy, for renewable-energy meteorology, thermodynamics, and heat-transfer problems. We call for papers demonstrating novel machine learning techniques in remote sensing for meteorology and the geosciences, generative Earth-system modeling, transfer learning from geophysics and numerical simulations, and uncertainty in Earth-science learning representations. We also seek theoretical developments in interpretable machine learning in meteorology and geoscientific models, hybrid models with Earth-science knowledge-guided machine learning, representation learning from graphs and manifolds in spatiotemporal models, and dimensionality reduction in the Earth sciences. In addition, we seek Earth-science applications from vision, robotics, multi-agent systems, and reinforcement learning. New labelled benchmark datasets and generative visualizations of the Earth are also of particular interest. A new area of interest is integrated assessment models and human-centered AI for Earth.


AI4Earth Areas of Interest:
- Atmospheric Science
- Hydro and Cryospheres
- Solid Earth
- Theoretical Advances
- Remote Sensing
- Energy in the Earth system
- Extreme weather & climate
- Geo-health
- Biosphere & Biogeosciences
- Planetary sciences
- Benchmark datasets
- People-Earth

Talking to Strangers: Zero-Shot Emergent Communication

Marie Ossenkopf, Angelos Filos, Abhinav Gupta, Michael Noukhovitch, Angeliki Lazaridou, Jakob Foerster, Kalesha Bullard, Rahma Chaabouni, Eugene Kharitonov, Roberto Dessì
2020-12-12T07:00:00-08:00 - 2020-12-12T14:10:00-08:00
(EST)
10.10 - 10.40 **Ruth Byrne** (TCD) - How people make inferences about other people's inferences
14.00 - 14.30 **Michael Bowling** (University of Alberta) - Zero-Shot Coordination
14.30 - 15.00 **Richard Futrell** (UCI) - Information-theoretic models of natural language

Communication is one of the most impressive human abilities, but historically it has been studied in machine learning mainly on confined datasets of natural language. Thanks to deep RL, emergent communication can now be studied in complex multi-agent scenarios.

Three previous successful workshops (2017-2019) have gathered the community to discuss how, when, and to what end communication emerges, producing research later published at top ML venues (e.g., ICLR, ICML, AAAI). However, many approaches to studying emergent communication rely on extensive amounts of shared training time. Our question is: Can we do that faster?

Humans interact with strangers on a daily basis. They possess a basic shared protocol, but a huge part of it is nevertheless defined by context. Humans are capable of adapting their shared protocol to ever-new situations, and a general AI would need this capability too.

We want to explore the possibilities for artificial agents to evolve ad hoc communication spontaneously by interacting with strangers. Since humans excel at this task, we want to start by having the participants of the workshop take the role of their agents and develop their own bots for an interactive game. This will illuminate the necessities of zero-shot communication learning in a practical way and form a base of understanding on which to build algorithms. The participants will be split into groups and will have one hour to develop their bots. Then a round-robin tournament will follow, in which bots will play an iterated zero-shot communication game with other teams’ bots.

This interactive approach is especially aimed at the stated NeurIPS workshop goals of clarifying questions for a subfield or application area and crystallizing common problems. It condenses our experience from former workshops on how workshop design can facilitate cooperation and progress in the field. We also believe that this will maximize the interactions and exchange of ideas within our community.

Machine Learning for Mobile Health

Joe Futoma, Walter Dempsey, Katherine Heller, Yi-An Ma, Nicholas Foti, Marianne Njifon, Kelly Zhang, Hera Shi
2020-12-12T07:00:00-08:00 - 2020-12-12T14:30:00-08:00
Mobile health (mHealth) technologies have transformed the mode and quality of clinical research. Wearable sensors and mobile phones provide real-time data streams that support automated clinical decision making, allowing researchers and clinicians to provide ecological and in-the-moment support to individuals in need. Mobile health technologies are used across various health fields. Their inclusion in clinical care has aimed to improve HIV medication adherence, increase activity, supplement counseling/pharmacotherapy in treatment for substance use, reinforce abstinence in addictions, and support recovery from alcohol dependence. The development of mobile health technologies, however, has progressed at a faster pace than the science and methodology to evaluate their validity and efficacy.

Current mHealth technologies are limited in their ability to understand how adverse health behaviors develop, how to predict them, and how to encourage healthy behaviors. In order for mHealth to progress and have expanded impact, the field needs to facilitate collaboration among machine learning researchers, statisticians, mobile sensing researchers, human-computer interaction researchers, and clinicians. Techniques from multiple fields can be brought to bear on the substantive problems facing this interdisciplinary field: experimental design, causal inference, multi-modal complex data analytics, representation learning, reinforcement learning, deep learning, transfer learning, data visualization, and clinical integration.

This workshop will assemble researchers from the key areas in this interdisciplinary space necessary to better address the challenges currently facing the widespread use of mobile health technologies.

Shared Visual Representations in Human and Machine Intelligence (SVRHM)

Arturo Deza, Joshua Peterson, N Apurva Ratan Murty, Tom Griffiths
2020-12-12T07:50:00-08:00 - 2020-12-12T17:10:00-08:00
https://twitter.com/svrhm2020

The goal of the 2nd Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop is to disseminate relevant, parallel findings in the fields of computational neuroscience, psychology, and cognitive science that may inform modern machine learning. In the past few years, machine learning methods---especially deep neural networks---have widely permeated the vision science, cognitive science, and neuroscience communities. As a result, scientific modeling in these fields has greatly benefited, producing a swath of potentially critical new insights into the human mind. Since human performance remains the gold standard for many tasks, these cross-disciplinary insights and analytical tools may point towards solutions to many of the current problems that machine learning researchers face (e.g., adversarial attacks, compression, continual learning, and self-supervised learning). Thus we propose to invite leading cognitive scientists with strong computational backgrounds to disseminate their findings to the machine learning community with the hope of closing the loop by nourishing new ideas and creating cross-disciplinary collaborations. In particular, this year's version of the workshop will have a heavy focus on the relative roles of larger datasets and stronger inductive biases as we work on tasks that go beyond object recognition.

Consequential Decisions in Dynamic Environments

Niki Kilbertus, Angela Zhou, Ashia Wilson, John Miller, Lily Hu, Lydia T. Liu, Nathan Kallus, Shira Mitchell
2020-12-12T08:00:00-08:00 - 2020-12-12T15:50:00-08:00
Machine learning is rapidly becoming an integral component of sociotechnical systems. Predictions are increasingly used to grant beneficial resources or withhold opportunities, and the consequences of such decisions induce complex social dynamics by changing agent outcomes and prompting individuals to proactively respond to decision rules. This introduces challenges for standard machine learning methodology. Static measurements and training sets poorly capture the complexity of dynamic interactions between algorithms and humans. Strategic adaptation to decision rules can render statistical regularities obsolete. Correlations momentarily observed in data may not be robust enough to support interventions for long-term welfare. Recognizing the limits of traditional, static approaches to decision-making, researchers in fields ranging from public policy to computer science to economics have recently begun to view consequential decision-making through a dynamic lens. This workshop will confront the use of machine learning to make consequential decisions in dynamic environments. Work in this area sits at the nexus of several different fields, and the workshop will provide an opportunity to better understand and synthesize social and technical perspectives on these issues and catalyze conversations between researchers and practitioners working across these diverse areas.

Second Workshop on AI for Humanitarian Assistance and Disaster Response

Ritwik Gupta, Robin Murphy, Eric Heim, Zhangyang Wang, Bryce Goodman, Nirav Patel, Piotr Bilinski, Edoardo Nemni
2020-12-12T08:00:00-08:00 - 2020-12-12T18:00:00-08:00
Natural disasters are one of the oldest threats to both individuals and the societies they co-exist in. As a result, humanity has ceaselessly sought ways to provide assistance to people in need after disasters have struck. Further, natural disasters are but a single, extreme example of the many possible humanitarian crises: disease outbreaks, famine, and oppression against disadvantaged groups can pose even greater dangers to people, with less obvious solutions. In this workshop, we seek to bring together the Artificial Intelligence (AI) and Humanitarian Assistance and Disaster Response (HADR) communities in order to bring AI to bear on real-world humanitarian crises. Through this workshop, we intend to establish meaningful dialogue between the communities.

By the end of the workshop, the NeurIPS research community can come to understand the practical challenges of aiding those who are experiencing crises, while the HADR community can understand the current state of the art and practice in AI. Through this, we seek to begin establishing a pipeline for transitioning the research created by the NeurIPS community to real-world humanitarian issues.

Machine Learning for Structural Biology

Raphael Townshend, Stephan Eismann, Ron Dror, Ellen Zhong, Namrata Anand, John Ingraham, Wouter Boomsma, Sergey Ovchinnikov, Roshan Rao, Per Greisen, Rachel Kolodny, Bonnie Berger
2020-12-12T08:00:00-08:00 - 2020-12-12T18:00:00-08:00
Spurred on by recent advances in neural modeling and wet-lab methods, structural biology, the study of the three-dimensional (3D) atomic structure of proteins and other macromolecules, has emerged as an area of great promise for machine learning. The shape of macromolecules is intrinsically linked to their biological function (e.g., much like the shape of a bike is critical to its transportation purposes), and thus machine learning algorithms that can better predict and reason about these shapes promise to unlock new scientific discoveries in human health as well as increase our ability to design novel medicines.

Moreover, fundamental challenges in structural biology motivate the development of new learning systems that can more effectively capture physical inductive biases, respect natural symmetries, and generalize across atomic systems of varying sizes and granularities. Through the Machine Learning in Structural Biology workshop, we aim to include a diverse range of participants and spark a conversation on the required representations and learning algorithms for atomic systems, as well as dive deeply into how to integrate these with novel wet-lab capabilities.

Competition Track Saturday

Hugo Jair Escalante, Katja Hofmann
2020-12-12T08:00:00-08:00 - 2020-12-12T17:45:00-08:00
Second session of the competition program at NeurIPS 2020.

Machine learning competitions have grown in popularity and impact over the last decade, emerging as an effective means to advance the state of the art by posing well-structured, relevant, and challenging problems to the community at large. Motivated by a reward or merely the satisfaction of seeing their machine learning algorithm reach the top of a leaderboard, practitioners innovate, improve, and tune their approach before evaluating on a held-out dataset or environment. The competition track of NeurIPS has matured in 2020, its fourth year, with a considerable increase in both the number of challenges and the diversity of domains and topics. A total of 16 competitions are featured this year as part of the track, with 8 competitions associated with each of the two days. The list of competitions that are part of the program is available here:

https://neurips.cc/Conferences/2020/CompetitionTrack

HAMLETS: Human And Model in the Loop Evaluation and Training Strategies

Divyansh Kaushik, Bhargavi Paranjape, Forough Arabshahi, Yanai Elazar, Yixin Nie, Max Bartolo, Polina Kirichenko, Pontus Lars Erik Saito Stenetorp, Mohit Bansal, Zachary Lipton, Douwe Kiela
2020-12-12T08:15:00-08:00 - 2020-12-12T20:00:00-08:00
Human involvement in AI system design, development, and evaluation is critical to ensure that the insights being derived are practical, and the systems built are meaningful, reliable, and relatable to those who need them. Humans play an integral role in all stages of machine learning development, be it during data generation, interactively teaching machines, or interpreting, evaluating and debugging models. With growing interest in such “human in the loop” learning, we aim to highlight new and emerging research opportunities for the ML community that arise from the evolving needs to design evaluation and training strategies for humans and models in the loop. The specific focus of this workshop is on emerging and under-explored areas of human- and model-in-the-loop learning, such as employing humans to seek richer forms of feedback for data than labels alone, learning from dynamic adversarial data collection with humans employed to find weaknesses in models, learning from human teachers instructing computers through conversation and/or demonstration, investigating the role of humans in model interpretability, and assessing social impact of ML systems. This workshop aims to bring together interdisciplinary researchers from academia and industry to discuss major challenges, outline recent advances, and facilitate future research in these areas.

The Challenges of Real World Reinforcement Learning

Daniel Mankowitz, Gabriel Dulac-Arnold, Shie Mannor, Omer Gottesman, Anusha Nagabandi, Doina Precup, Timothy A Mann, Gabe Dulac-Arnold
2020-12-12T08:30:00-08:00 - 2020-12-12T19:30:00-08:00
Reinforcement Learning (RL) has had numerous successes in recent years in solving complex problem domains. However, this progress has been largely limited to domains where a simulator is available or the real environment is quick and easy to access. This is one of a number of challenges that are bottlenecks to deploying RL agents on real-world systems. Two recent papers identify nine important challenges that, if solved, will take a big step towards enabling RL agents to be deployed to real-world systems (Dulac-Arnold et al., 2019, 2020). The goals of this workshop are four-fold: (1) to provide a forum for researchers from academia and industry, as well as industry practitioners from diverse backgrounds, to discuss the challenges faced in real-world systems; (2) to discuss and prioritize the nine research challenges, including determining which challenges we should focus on next and whether any new challenges should be added to the list or existing ones removed from it; (3) to discuss problem formulations for the various challenges, and to critique these formulations or develop new ones -- this is especially important for more abstract challenges such as explainability, and we should also ask whether the current Markov Decision Process (MDP) formulation is sufficient for solving these problems or whether modifications need to be made; and (4) to discuss approaches to solving combinations of these challenges.

Workshop on Computer Assisted Programming (CAP)

Augustus Odena, Charles Sutton, Nadia Polikarpova, Josh Tenenbaum, Armando Solar-Lezama, Isil Dillig
2020-12-12T08:30:00-08:00 - 2020-12-12T16:10:00-08:00
There are many tasks that could be automated by writing computer programs, but most people don’t know how to program computers (this is the subject of program synthesis, the study of how to automatically write programs from user specifications). Building tools for doing computer-assisted-programming could thus improve the lives of many people (and it’s also a cool research problem!). There has been substantial recent interest in the ML community in the problem of automatically writing computer programs from user specifications, as evidenced by the increased volume of Program Synthesis submissions to ICML, ICLR, and NeurIPS.

Despite this recent work, a lot of exciting questions are still open, such as how to combine symbolic reasoning over programs with deep learning, how to represent programs and user specifications, and how to apply program synthesis within computer vision, robotics, and other control problems. There is also work to be done on fusing research in the ML community with research on Programming Languages (PL) through collaboration between the ML and PL communities, and there remains the challenge of establishing benchmarks that allow for easy comparison and measurement of progress. The aim of the CAP workshop is to address these points. This workshop will bring together researchers in programming languages, machine learning, and related areas who are interested in program synthesis and other methods for automatically writing programs from a specification of intended behavior.

Self-Supervised Learning -- Theory and Practice

Pengtao Xie, Shanghang Zhang, Pulkit Agrawal, Ishan Misra, Cynthia Rudin, Abdelrahman Mohamed, Wenzhen Yuan, Barret Zoph, Laurens van der Maaten, Xingyi Yang, Eric Xing
2020-12-12T08:50:00-08:00 - 2020-12-12T18:40:00-08:00
Self-supervised learning (SSL) is an unsupervised approach for representation learning without relying on human-provided labels. It creates auxiliary tasks on unlabeled input data and learns representations by solving these tasks. SSL has demonstrated great success on images (e.g., MoCo, PIRL, SimCLR) and texts (e.g., BERT) and has shown promising results in other data modalities, including graphs, time-series, audio, etc. On a wide variety of tasks, SSL without using human-provided labels achieves performance that is close to that of fully supervised approaches.
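
As a concrete illustration of one such auxiliary task (background only; the workshop does not prescribe any method), here is a minimal PyTorch sketch of the SimCLR-style NT-Xent contrastive loss, in which two augmented views of the same input should map to nearby embeddings while all other batch elements act as negatives:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss. z1[i] and z2[i] are embeddings of two
    augmented views of the same input; every other embedding in the
    batch serves as a negative."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d)
    sim = z @ z.t() / temperature                       # cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    # The positive for view i is view i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage with random stand-ins for an encoder's projected outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

No labels appear anywhere: the supervisory signal comes entirely from the correspondence between augmented views, which is what makes the task "self-supervised".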

The existing SSL research mostly focuses on improving empirical performance without a theoretical foundation. While the proposed SSL approaches are empirically effective, why they perform well is theoretically unclear. For example, why do certain auxiliary tasks in SSL perform better than others? How many unlabeled data examples are needed by SSL to learn a good representation? How is the performance of SSL affected by neural architectures?

In this workshop, we aim to bridge this gap between theory and practice. We bring together SSL-interested researchers from various domains to discuss the theoretical foundations of empirically well-performing SSL approaches and how the theoretical insights can further improve SSL’s empirical performance. Different from previous SSL-related workshops which focus on empirical effectiveness of SSL approaches without considering their theoretical foundations, our workshop focuses on establishing the theoretical foundation of SSL and providing theoretical insights for developing new SSL approaches.
We invite submissions of both theoretical works and empirical works, as well as work at the intersection of the two. Topics include but are not limited to:
- Theoretical foundations of SSL
- Sample complexity of SSL methods
- Theory-driven design of auxiliary tasks in SSL
- Comparative analysis of different auxiliary tasks
- Comparative analysis of SSL and supervised approaches
- Information theory and SSL
- SSL for computer vision, natural language processing, robotics, speech processing, time-series analysis, graph analytics, etc.
- SSL for healthcare, social media, neuroscience, biology, social science, etc.
- Cognitive foundations of SSL

In addition to invited talks by leading researchers from diverse backgrounds including CV, NLP, robotics, theoretical ML, etc., the workshop will feature poster sessions and panel discussion to share perspectives on establishing foundational understanding of existing SSL approaches and theoretically-principled ways of developing new SSL methods. We accept submissions of short papers (up to 4 pages excluding references in NeurIPS format), which will be peer-reviewed by at least two reviewers. The accepted papers are allowed to be submitted to other conference venues.

Offline Reinforcement Learning

Aviral Kumar, Rishabh Agarwal, George Tucker, Lihong Li, Doina Precup, Aviral Kumar
2020-12-12T09:00:00-08:00 - 2020-12-12T18:00:00-08:00
The common paradigm in reinforcement learning (RL) assumes that an agent frequently interacts with the environment and learns using its own collected experience. This mode of operation is prohibitive for many complex real-world problems, where repeatedly collecting diverse data is expensive (e.g., robotics or educational agents) and/or dangerous (e.g., healthcare). Alternatively, offline RL focuses on training agents with logged data in an offline fashion, with no further environment interaction. Offline RL promises to bring forward a data-driven RL paradigm and carries the potential to scale up end-to-end learning approaches to real-world decision-making tasks such as robotics, recommendation systems, dialogue generation, autonomous driving, healthcare systems, and safety-critical applications. Recently, successful deep RL algorithms have been adapted to the offline RL setting and have demonstrated a potential for success in a number of domains; however, significant algorithmic and practical challenges remain to be addressed. The goal of this workshop is to bring attention to offline RL, both from within and from outside the RL community; to discuss algorithmic challenges that need to be addressed; to discuss potential real-world applications; to discuss limitations and challenges; and to come up with concrete problem statements and evaluation protocols, inspired by real-world applications, for the research community to work on.
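
To make the setting concrete (an illustrative sketch only, not a method endorsed by the workshop): in offline RL the learner sees nothing but a fixed log of transitions. A minimal tabular version, with all names hypothetical:

```python
import numpy as np

def offline_q_learning(transitions, n_states, n_actions,
                       gamma=0.99, lr=0.1, epochs=200):
    """Tabular Q-learning on a fixed batch of logged transitions
    (s, a, r, s_next, done). The learner never interacts with the
    environment -- it only replays the log."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(epochs):
        for s, a, r, s_next, done in transitions:
            target = r + (0.0 if done else gamma * Q[s_next].max())
            Q[s, a] += lr * (target - Q[s, a])
    return Q

# Toy log from a 2-state, 2-action MDP collected by some behavior policy.
log = [(0, 1, 1.0, 1, False), (1, 0, 0.0, 0, False), (1, 1, 2.0, 1, True)]
Q = offline_q_learning(log, n_states=2, n_actions=2)
greedy_policy = Q.argmax(axis=1)
```

Much of the algorithmic difficulty the workshop targets arises precisely because the greedy policy above can exploit state-action pairs the log never covers, motivating conservative or behavior-constrained variants.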

For details on submission please visit: https://offline-rl-neurips.github.io/ (Submission deadline: October 9, 11:59 pm PT)

Speakers:
Emma Brunskill (Stanford)
Finale Doshi-Velez (Harvard)
John Langford (Microsoft Research)
Nan Jiang (UIUC)
Brandyn White (Waymo Research)
Nando de Freitas (DeepMind)

Machine Learning for Systems

Anna Goldie, Azalia Mirhoseini, Jonathan Raiman, Martin Maas, Xinlei XU
2020-12-12T09:00:00-08:00 - 2020-12-12T17:50:00-08:00
**NeurIPS 2020 Workshop on Machine Learning for Systems**

Website: http://mlforsystems.org/

Submission Link: https://cmt3.research.microsoft.com/MLFS2020/Submission/Index

Important Dates:

Submission Deadline: **October 9th, 2020** (AoE)
Acceptance Notifications: October 23rd, 2020
Camera-Ready Submission: November 29th, 2020
Workshop: December 12th, 2020

Call for Papers:

Machine Learning for Systems is an interdisciplinary workshop that brings together researchers in computer systems and machine learning. This workshop is meant to serve as a platform to promote discussions between researchers in these target areas.

We invite submission of up to 4-page extended abstracts in the broad area of using machine learning in the design of computer systems. We are especially interested in submissions that move beyond using machine learning to replace numerical heuristics. This year, we hope to see novel system designs, streamlined cross-platform optimization, and new benchmarks for ML for Systems.

Accepted papers will be made available on the workshop website, but there will be no formal proceedings. Authors may therefore publish their work in other journals or conferences. The workshop will include invited talks from industry and academia as well as oral and poster presentations by workshop participants.

Areas of interest:

* Supervised, unsupervised, and reinforcement learning research with applications to:
- Systems Software
- Runtime Systems
- Distributed Systems
- Security
- Compilers, data structures, and code optimization
- Databases
- Computer architecture, microarchitecture, and accelerators
- Circuit design and layout
- Interconnects and Networking
- Storage
- Datacenters
* Representation learning for hardware and software
* Optimization of computer systems and software
* Systems modeling and simulation
* Implementations of ML for Systems and challenges
* High quality datasets for ML for Systems problems

Submission Instructions:

We welcome submissions of up to 4 pages (not including references). This is not a strict limit, but authors are encouraged to adhere to it if possible. All submissions must be in PDF format and should follow the NeurIPS 2020 format. Submissions do not have to be anonymized.

Please submit your paper no later than October 9th, 2020, midnight anywhere in the world, to CMT via the submission link above.

Deep Learning through Information Geometry

Pratik Chaudhari, Alex Alemi, Varun Jog, Dhagash Mehta, Frank Nielsen, Stefano Soatto, Greg Ver Steeg
2020-12-12T09:20:00-08:00 - 2020-12-12T18:30:00-08:00
Attempts at understanding deep learning have come from different disciplines, namely physics, statistics, information theory, and machine learning. These lines of investigation have very different modeling assumptions and techniques, and it is unclear how their results may be reconciled. This workshop builds upon the observation that Information Geometry has strong overlaps with these directions and may serve as a means to develop a holistic understanding of deep learning. The workshop program is designed to answer two specific questions. The first question is: how do the geometry of the hypothesis class and the information-theoretic properties of optimization inform generalization? Good datasets have been a key propeller of the empirical success of deep networks, yet our theoretical understanding of data is poor. The second question the workshop will focus on is: how can we model data, and use that understanding, to improve optimization and generalization in the low-data regime?
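
For readers new to the area, the central object of information geometry (standard background, not workshop-specific material) is the Fisher information metric on a parametric family $p_\theta$, which in turn yields the natural-gradient update that follows the steepest descent direction in distribution space rather than parameter space:

```latex
g_{ij}(\theta) \;=\; \mathbb{E}_{x \sim p_\theta}\!\left[
  \partial_{\theta_i} \log p_\theta(x)\;
  \partial_{\theta_j} \log p_\theta(x) \right],
\qquad
\theta_{t+1} \;=\; \theta_t \;-\; \eta\, G(\theta_t)^{-1}\,\nabla_\theta L(\theta_t).
```

Here $G(\theta) = [g_{ij}(\theta)]$ is the Fisher information matrix and $L$ a loss; the metric makes the update invariant to reparameterization, which is one reason the geometric view is a candidate bridge between the statistical and optimization perspectives discussed above.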

Gather.Town link: https://neurips.gather.town/app/vPYEDmTHeUbkACgf/dl-info-neurips2020