

Workshop

New Frontiers of AI for Drug Discovery and Development

Animashree Anandkumar · Ilija Bogunovic · Ti-chiun Chang · Quanquan Gu · Jure Leskovec · Michelle Li · Chong Liu · Nataša Tagasovska · Mengdi Wang · Wei Wang
Dec 15, 6:15 AM - 3:20 PM Room 242

We will facilitate interdisciplinary discussions to identify gaps and opportunities for AI in the drug discovery and development pipeline.

Workshop

Generative AI for Education (GAIED): Advances, Opportunities, and Challenges

Paul Denny · Sumit Gulwani · Neil Heffernan · Tanja Käser · Steven Moore · Anna Rafferty · Adish Singla
Dec 15, 6:15 AM - 3:30 PM Room 265 - 268

GAIED (pronounced "guide") aims to bring together researchers, educators, and practitioners to explore the potential of generative AI for enhancing education.

Workshop

UniReps: Unifying Representations in Neural Models

Marco Fumero · Emanuele Rodolà · Francesco Locatello · Gintare Karolina Dziugaite · Mathilde Caron · Clémentine Dominé
Dec 15, 6:15 AM - 3:15 PM Great Hall (level 1)

Neural models tend to learn similar representations when subject to similar stimuli; this behavior has been observed in both biological and artificial settings. The emergence of these similar representations is igniting a growing interest in the fields of neuroscience and artificial intelligence. To gain a theoretical understanding of this phenomenon, promising directions include analyzing the learning dynamics and studying the problem of identifiability in the functional and parameter space. This has strong consequences in unlocking a plethora of applications in ML, from model fusion, model stitching, and model reuse to improving the understanding of biological and artificial neural models. The objective of the workshop is to discuss theoretical findings, empirical evidence, and practical applications of this phenomenon, benefiting from the cross-pollination of different fields (ML, neuroscience, cognitive science) to foster the exchange of ideas and encourage collaborations. Overall, the questions we aim to investigate are when, why, and how internal representations of distinct neural models can be unified into a common representation.

Workshop

Deep Generative Models for Health

Emanuele Palumbo · Laura Manduchi · Sonia Laguna · Melanie F. Pradier · Vincent Fortuin · Stephan Mandt · Julia Vogt
Dec 15, 6:15 AM - 3:30 PM Room 260 - 262

Deep generative models have recently gained increasing attention in machine learning research, with breakthroughs such as Stable Diffusion, DALL-E, and ChatGPT, among others. Despite significant advancements, the potential of generative AI in the health sector is not yet fully exploited. To address this gap, our workshop serves as a forum for presenting the latest research trends in generative models tailored for health applications. By bringing together a diversified pool of experts, we aim to investigate the methodological requirements and clinical implications of generative AI for health applications, thus shedding light on the challenges that lie ahead. Through this collaborative effort, we aspire to unlock the potential of generative models for groundbreaking advancements in the health sector.

Workshop

Causal Representation Learning

Sara Magliacane · Atalanti Mastakouri · Yuki Asano · Claudia Shi · Cian Eastwood · Sébastien Lachapelle · Bernhard Schölkopf · Caroline Uhler
Dec 15, 6:15 AM - 3:30 PM Room 243 - 245

Can we learn causal representations from raw data, e.g. images? This workshop connects research in causality and representation learning.

Workshop

Associative Memory & Hopfield Networks in 2023

Parikshit Ram · Hilde Kuehne · Daniel Lee · Cengiz Pehlevan · Mohammed Zaki · Lenka Zdeborová
Dec 15, 6:15 AM - 3:30 PM Room 223

This workshop will discuss the latest multidisciplinary developments in Associative Memory and Hopfield Networks. A number of leading researchers in this research area from around the world have already agreed to attend and present their latest results. We anticipate sharing their presentations and outlining future research directions in this emerging field with the rest of the NeurIPS community.


Workshop

Touch Processing: a new Sensing Modality for AI

Roberto Calandra · Haozhi Qi · Mike Lambeta · Perla Maiolino · Yasemin Bekiroglu · Jitendra Malik
Dec 15, 6:15 AM - 3:30 PM Room 214

This workshop aims to seed the foundations of AI/ML dedicated to studying touch and to enable future applications such as robotics and AR/VR.

Workshop

NeurIPS 2023 Workshop: Machine Learning and the Physical Sciences

Brian Nord · Atilim Gunes Baydin · Adji Bousso Dieng · Emine Kucukbenli · Siddharth Mishra-Sharma · Benjamin Nachman · Kyle Cranmer · Gilles Louppe · Savannah Thais
Dec 15, 6:15 AM - 3:30 PM Hall B2 (level 1)

Physical sciences and machine learning: more than the sum of their parts. Join us to discuss research at the convergence of these fields!

Workshop

Foundation Models for Decision Making

Sherry Yang · Ofir Nachum · Yilun Du · Stephen McAleer · Igor Mordatch · Linxi Fan · Jeannette Bohg · Dale Schuurmans
Dec 15, 6:15 AM - 3:30 PM Hall E2 (level 1)

Foundation models pretrained on diverse vision and language datasets have demonstrated exceptional capabilities in performing a wide range of downstream vision and language tasks. As foundation models are deployed in real-world applications such as dialogue, autonomous driving, healthcare, and robotics, they inevitably face new challenges such as learning from external feedback, adapting to different task modalities, and performing long-term reasoning and planning. Such challenges have traditionally been at the core of sequential decision making, encompassing areas such as reinforcement learning, imitation learning, planning, search, and optimal control. These research fields have traditionally focused on task-specific settings with limited prior knowledge, and yet there has been significant research progress in surpassing human performance in tasks like playing board games and Atari video games, as well as operating robots to complete navigation and manipulation tasks. However, since these methods generally learn to solve a specific task from scratch without broad knowledge from vision and language, they can struggle with generalization and sample efficiency. The goal of this workshop is to bring together the sequential decision making community, including planning, search, RL, and optimal control, with the foundation models community in vision and language to confront the challenges in decision making at scale. The workshop will span high-level discussions on how foundation models and decision making can benefit each other when jointly considered, as well as low-level algorithmic details of various decision making algorithms and vision-language architectures, which may present both opportunities and challenges. Specific topics will include, for example, foundation model agents interacting with humans, computers, tools, simulators, the physical world, and each other.

Workshop

Information-Theoretic Principles in Cognitive Systems (InfoCog)

Noga Zaslavsky · Rava Azeredo da Silveira · Ronit Bustin · Ron M. Hecht
Dec 15, 6:15 AM - 3:30 PM Room 215 - 216

Information theory provides a mathematical framework for formulating and quantifying the basic limitations of data compression and communication. The notions of data compression and communication, rooted in analog and digital communication, are also relevant to other domains; as such, information theory spans a number of research fields. Aiming to formulate, understand, and quantify the storage and processing of information is a thread that ties together these disparate fields, and especially the study of cognition in humans and machines. Specifically, researchers attempt to reach an integrative computational theory of human and artificial cognition by leveraging information-theoretic principles as bridges between various cognitive functions and neural representations. Insights from information-theoretic formalisms have also led to tangible outcomes that have influenced the operation of artificial intelligent systems. One example is the information bottleneck (IB) approach, which has yielded insights on learning in neural networks (NNs), as well as tools for slow feature analysis and speech recognition. A central application of the IB approach to NNs views data transfer between layers as an autoencoder; the approach then uses a variational approximation of the IB to produce a feasible minimization objective that results in efficient training (the variational IB, or VIB). In the other direction, the variational autoencoder (VAE) framework has also been used to explain cognitive functions. The IB approach has further been applied to emergent communication (EC) in both humans and machines, using a vector-quantized VIB (VQ-VIB) method that extends the aforementioned VIB method. Another example is the trade-off between information and value in the context of sequential decision making.
This corresponding formalism has led to tangible methods for solving sequential decision-making problems and has even been used in an experimental study of mouse navigation, as well as in studies of drivers' eye-gaze patterns and drivers' language models. In aiming to understand machine learning (ML), specifically in the context of NNs, or cognition, we need theoretical principles (hypotheses) that can be tested. To quote Shannon: "I personally believe that many of the concepts of information theory will prove useful in these other fields - and, indeed, some results are already quite promising - but the establishing of such applications is not a trivial matter of translating words to a new domain, but rather the slow tedious process of hypothesis and experimental verification. If, for example, the human being acts in some situations like an ideal decoder, this is an experimental and not a mathematical fact, and as such must be tested under a wide variety of experimental situations." Today, both ML and cognition research have access to huge amounts of data. Establishing quantitative theories and corresponding methods for computation can have a massive impact on progress in these fields. Broadly, this workshop aims to further the understanding of information flow in cognitive processes and in neural network models of cognition. More concretely, this year's workshop goals are twofold. On the one hand, we wish to provide a fruitful platform for discussions of formulations of the storage and processing of information, in either human or artificial cognitive systems, via information-theoretic measures such as the formalisms mentioned above. In particular, the workshop invites information theory researchers to take part in such discussions, allowing first-hand sharing of knowledge and ideas.
On the other hand, we hope this workshop can advance, sharpen, and enhance research on the computation of information-theoretic quantities, specifically for the needs and benefits of cognition research. The two aims of the workshop are not independent of one another: any information-theoretic formalism that we wish to experimentally verify has to be, in some sense, computationally feasible. Moreover, we wish for computation and estimation methods to be developed in a way that is tailored to the open questions in human and artificial cognition. The workshop focuses on bringing together researchers interested in integrating information-theoretic approaches with researchers focused on the computation and estimation of information-theoretic quantities, with the aim of tightening the collaboration between the two communities. The former come from cognitive science, neuroscience, linguistics, economics, and beyond. Efforts in the computation and estimation of information-theoretic quantities are pursued for many reasons, and form a line of research gaining increasing attention due to advances in ML; in recent years, these researchers have created new methods to measure information-related quantities.

Workshop

AI for Accelerated Materials Design (AI4Mat-2023)

Santiago Miret · Benjamin Sanchez-Lengeling · Jennifer Wei · Vineeth Venugopal · Marta Skreta · N M Anoop Krishnan
Dec 15, 6:15 AM - 3:30 PM Room 228 - 230

The AI for Accelerated Materials Discovery (AI4Mat) Workshop 2023 provides an inclusive and collaborative platform where AI researchers and material scientists converge to tackle the cutting-edge challenges in AI-driven materials discovery and development. Our goal is to foster a vibrant exchange of ideas, breaking down barriers between disciplines and encouraging insightful discussions among experts from diverse disciplines and curious newcomers to the field. The workshop embraces a broad definition of materials design encompassing matter in various forms, such as crystalline and amorphous solid-state materials, glasses, molecules, nanomaterials, and devices. By taking a comprehensive look at automated materials discovery spanning AI-guided design, synthesis and automated material characterization, we hope to create an opportunity for deep, thoughtful discussion among researchers working on these interdisciplinary topics, and highlight ongoing challenges in the field.

Workshop

Machine Learning in Structural Biology Workshop

Hannah Wayment-Steele · Roshan Rao · Ellen Zhong · Sergey Ovchinnikov · Gabriele Corso · Gina El Nesr
Dec 15, 6:30 AM - 3:05 PM Room 208 - 210

Structural biology, the study of the 3D structure or shape of proteins and other biomolecules, has been transformed by breakthroughs from machine learning algorithms. While methods such as AlphaFold2 have made rapid progress in certain areas, many active and open challenges for the field remain, including modeling protein dynamics, predicting the structure of other classes of biomolecules such as RNA, and ultimately relating the structure of isolated proteins to the in vivo and contextual nature of their underlying function. These challenges are diverse and require interdisciplinary collaboration between ML and structural biology researchers. The 4th edition of the Machine Learning in Structural Biology (MLSB) workshop focuses on these challenges and opportunities. In a unique show of support, the journal PRX Life has committed to waiving publication fees for accepted papers in a special collection for interested authors. We anticipate this workshop will be of significant interest to both ML researchers and computational/experimental biologists, and will stimulate continued problem-solving and new directions in the field.

Workshop

Table Representation Learning Workshop

Madelon Hulsebos · Bojan Karlaš · Haoyu Dong · Gael Varoquaux · Laurel Orr · Pengcheng Yin
Dec 15, 6:30 AM - 3:30 PM Room 235 - 236

Tables are a promising modality for representation learning, with too much application potential to ignore. However, tables have long been overlooked despite their dominant presence in the data landscape, e.g. in data management and analysis pipelines. The majority of datasets in Google Dataset Search, for example, resemble typical tabular file formats like CSVs. Similarly, the top-3 most-used database management systems are all relational (RDBMS). Representation learning over tables (TRL), possibly combined with other modalities such as text or SQL, has shown impressive performance for tasks like table-based question answering, table understanding, and data preparation. More recently, TRL was shown to be effective for tabular ML as well, while researchers have also started exploring the impressive capabilities of LLMs for table encoding and data manipulation. Follow our Twitter feed for updates: https://twitter.com/TrlWorkshop.

The first edition of the Table Representation Learning (TRL) workshop at NeurIPS 2022 gathered an enthusiastic community and stimulated new research and collaborations, which we aim to continue in 2023. The TRL workshop has three main goals:

(1) Motivate tables as a primary modality for representation and generative learning and advance the area further.
(2) Showcase impactful applications of pretrained table models and discuss future opportunities.
(3) Foster discussion and collaboration across the ML, NLP and DB communities.

Workshop

Instruction Tuning and Instruction Following

Qinyuan Ye · Yizhong Wang · Shayne Longpre · Yao Fu · Daniel Khashabi
Dec 15, 6:30 AM - 3:30 PM Room 220 - 222

Recent advancements in training large language models (LLMs) to follow “instructions” have significantly increased their ability to comprehend open-ended language commands, encompassing a wide range of needs, preferences, and values.

This transformation has led to the creation of remarkable industrial models such as GPT-4 and Bard, as well as an increased focus within the open-source and research communities: creating new benchmarks and resources, developing new training methods, and understanding the limitations of these methods. Furthermore, instruction following powered by LLMs has proven to be effective in multi-modal settings, with applications in image editing and robotic command execution.

We organize this workshop to facilitate discussions on advancing instruction tuning methodologies and constructing general-purpose instruction-following models. We believe it is crucial to organize this workshop due to the prevalence of proprietary models with restricted access, which creates the need for an open platform to encourage discussions. Moreover, we aim to foster interdisciplinary collaboration by bringing together researchers from diverse fields such as natural language processing, computer vision, robotics, human-computer interaction, and AI safety, among others, to share their latest findings and explore potential avenues for future research.

Workshop

AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics

Sydney Levine · Liwei Jiang · Jared Moore · Zhijing Jin · Yejin Choi
Dec 15, 6:45 AM - 3:30 PM Room 255 - 257

Be it in advice from a chatbot, suggestions on how to administer resources, or which content to highlight, AI systems increasingly make value-laden decisions. However, researchers are becoming increasingly concerned about whether AI systems are making the right decisions. These emerging issues in the AI community have been long-standing topics of study in the fields of moral philosophy and moral psychology. Philosophers and psychologists have for decades (if not centuries) been interested in the systematic description and evaluation of human morality and the sub-problems that come up when attempting to describe and prescribe answers to moral questions. For instance, philosophers and psychologists have long debated the merits of utility-based versus rule-based theories of morality, their various merits and pitfalls, and the practical challenges of implementing them in resource-limited systems. They have pondered what to do in cases of moral uncertainty, attempted to enumerate all morally relevant concepts, and argued about what counts as a moral issue at all.

In some isolated cases, AI researchers have slowly started to adopt the theories, concepts, and tools developed by moral philosophers and moral psychologists. For instance, we use the "trolley problem" as a tool, adopt philosophical moral frameworks to tackle contemporary AI problems, and have begun developing benchmarks that draw on psychological experiments probing moral judgment and development. Despite this, interdisciplinary dialogue remains limited. Each field uses specialized language, making it difficult for AI researchers to adopt the theoretical and methodological frameworks developed by philosophers and psychologists. Moreover, many theories in philosophy and psychology are developed at a high level of abstraction and are not computationally precise. In order to overcome these barriers, we need interdisciplinary dialogue and collaboration.

This workshop will create a venue to facilitate these interactions by bringing together psychologists, philosophers, and AI researchers working on morality. We hope that the workshop will be a jumping-off point for long-lasting collaborations among the attendees and will break down barriers that currently divide the disciplines. The central theme of the workshop will be the application of moral philosophy and moral psychology theories to AI practices. Our invited speakers are some of the leaders in the emerging efforts to draw on theories in philosophy or psychology to develop ethical AI systems. Their talks will demonstrate cutting-edge efforts to do this cross-disciplinary work, while also highlighting their own shortcomings (and those of the field more broadly). Each talk will receive a 5-minute commentary from a junior scholar in a field that is different from that of the speaker. We hope these talks and commentaries will inspire conversations among the rest of the attendees.

Workshop

Computational Sustainability: Promises and Pitfalls from Theory to Deployment

Suzanne Stathatos · Christopher Yeh · Laura Greenstreet · Tarun Sharma · Katelyn Morrison · Yuanqi Du · Chenlin Meng · Sherrie Wang · Fei Fang · Pietro Perona · Yoshua Bengio
Dec 15, 6:45 AM - 3:15 PM Room 238 - 239

Computational sustainability (CompSust) is an interdisciplinary research area that uses computational methods to help address the 17 United Nations Sustainable Development Goals (UN SDGs), including but not limited to hunger and poverty reduction, infrastructure development, and environmental conservation. Computational sustainability is a two-way street: sustainability domains benefit from computational tools and methods, and computational research areas benefit from the unique challenges that arise in attempting to address sustainability problems, including noisy and biased data, complex multi-agent systems, and multi-objective problems. Previous computational sustainability problems have led to new approaches in computer vision, reinforcement learning, multi-agent systems, and decision-focused learning. While computational sustainability problems span many domains, they share common challenges. This workshop will bring the community together to focus on two topics:

1. The path from theory to deployment: Many challenges arise on the path from theory to deployment. This workshop will help researchers navigate this path by bringing together participants and speakers from academia, industry, and non-profits, highlighting successes in going from theory to deployment, and facilitating collaboration.

2. Promises and pitfalls: Advances on ML benchmarks do not always translate to improvements in computational sustainability problems, with contributing factors including low signal-to-noise ratios, ever-changing conditions, and biased or imbalanced data. However, due to the difficulties of publishing negative results, these findings rarely reach the community, leading to duplicated effort and obscuring important gaps in existing methods.

The goals of this workshop are to (i) identify pathways from theory to deployment, including best practices and measures to quantify success, (ii) facilitate discussion and collaboration between participants from academia, industry, and the non-profit sector, and (iii) identify common failure modes and high-impact research directions, including "moonshot" challenges.

Workshop

Attributing Model Behavior at Scale (ATTRIB)

Tolga Bolukbasi · Logan Engstrom · Kelvin Guu · Andrew Ilyas · Sam Park · Ellie Pavlick · Anders Søgaard
Dec 15, 6:45 AM - 3:30 PM Room 271 - 273

Recently developed algorithmic innovations (e.g., transformers, diffusion models) and large-scale datasets (e.g., Common Crawl, LAION) have given rise to machine learning models with impressive capabilities. However, there is much left to understand in how these different factors combine to give rise to observed behaviors. For example, we still do not fully understand how the composition of training datasets influences downstream model capabilities (e.g., which data sources within LAION-5B are important for training high-quality CLIP embeddings?), how to attribute model capabilities to subcomponents inside the model (e.g., can we identify which subnetwork of an LLM implements addition?), and which algorithmic choices really drive performance (e.g., is RL necessary to align language models?).

A common theme underlying all these challenges is model behavior attribution: the need to tie model behavior back to factors in the machine learning pipeline, such as the choice of training dataset or particular training algorithm, that we can control or reason about. This workshop aims to bring together researchers and practitioners that advance our understanding of model behavior attribution in contexts that span data, models, and learning algorithms.

Workshop

NeurIPS 2023 Workshop on Diffusion Models

Bahjat Kawar · Valentin De Bortoli · Charlotte Bunne · James Thornton · Jiaming Song · Jong Chul Ye · Chenlin Meng
Dec 15, 6:50 AM - 3:30 PM Hall B1 (level 1)

Over the past three years, diffusion models have established themselves as a new generative modeling paradigm. Their empirical successes have broadened the applications of generative modeling to image, video, audio, 3D synthesis, science applications, and more. As diffusion models become more and more popular and are applied to extremely diverse problems, it also becomes harder to follow the key contributions in the field. This workshop aims to keep track of recent advances and identify guidelines for future research. By bringing together practice, methodology, and theory actors we aim to identify unexplored areas, foster collaboration, and push the frontier of diffusion model research.

Link to website: https://diffusionworkshop.github.io/

Ask questions to our panelists here: https://docs.google.com/forms/d/e/1FAIpQLSeTRsWFvKlsFg31K8Vq6hHGOydmvd7YNMuOLOCcKgqSqO8mXw/viewform

Workshop

Workshop on robustness of zero/few-shot learning in foundation models (R0-FoMo)

Ananth Balashankar · Saurabh Garg · Jindong Gu · Amrith Setlur · Yao Qin · Aditi Raghunathan · Ahmad Beirami
Dec 15, 6:50 AM - 3:30 PM La Nouvelle Orleans Ballroom A+B (level 2)

Recent advances in the capabilities of large foundation models have been catalyzed by repurposing pretrained models for domain-specific use cases through few-shot learning methods like prompt-tuning and in-context learning, and through zero-shot learning based on task descriptions. Given a few labeled examples that outline a new task [T5, GPT2, T0, DALL-E, CLIP], these large foundation models have demonstrably improved upon previous few-shot learning benchmarks [T-few, LAION]. We are closer than ever to learning from very few examples, and recent works [Frozen, Flamingo] have proposed methods to use large language and vision transformer models directly on these few examples, instead of relying on human annotation to create large datasets for fine-tuning. The lessons learned from past work in counterfactual reasoning, domain adaptation, meta-learning, continual learning, and adversarial training have to be revisited with a new lens towards improving the robustness of few-shot learning methods, or towards learning from no supervision (i.e., unlabeled data), in ways that scale to multiple tasks in a safe and responsible manner. In addition to leveraging few-shot learning methods with labeled examples, there is also significant potential in harnessing the power of unlabeled data. When labeled and unlabeled data are from the same distribution, semi-supervised learning methods can be modified to utilize large foundation models, which can further boost performance over purely few-shot algorithms. Furthermore, similar ideas need to be explored for unsupervised domain adaptation, to improve the robustness of fine-tuned methods to distribution shifts when the unlabeled data distribution is much broader than the distribution from which the labeled examples are collected.

Workshop

Agent Learning in Open-Endedness Workshop

Minqi Jiang · Mikayel Samvelyan · Jack Parker-Holder · Mayalen Etcheverry · Yingchen Xu · Michael Dennis · Roberta Raileanu
Dec 15, 7:00 AM - 3:00 PM Room 211 - 213

Open-ended learning (OEL) is receiving rapidly growing attention in recent years, as deep learning models become ever more adept at learning meaningful and useful behaviors from web-scale data. Improving the performance and generality of such models depends greatly on our ability to continue to collect new and useful training data. OEL systems co-evolve the learning agent (e.g. the model) with its environment or other sources of training data, resulting in the continued, active generation of new training data specifically useful for the current agent or model. Conceivably such OEL processes, if designed appropriately, can lead to models exhibiting increasingly general capabilities. However, it remains an open problem to produce a truly open-ended system in practice, one that endlessly generates meaningfully novel data. We hope our workshop provides a forum both for bridging knowledge across a diverse set of relevant fields as well as sparking new insights that can enable truly open-ended learning systems.

Workshop

Algorithmic Fairness through the Lens of Time

Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Jessica Schrouff
Dec 15, 7:00 AM - 3:30 PM Room 252 - 254
We are proposing the Algorithmic Fairness through the Lens of Time (AFLT) workshop, the fourth edition of this workshop series on algorithmic fairness. Previous editions have looked at causal approaches to fairness and the intersection of fairness with other fields of trustworthy machine learning, namely interpretability, robustness, and privacy. The aim of this year's workshop is to provide a venue to discuss foundational work on fairness, challenge existing static definitions of fairness (group, individual, causal), and explore the long-term effects of fairness methods. More importantly, the workshop aims to foster an open discussion on how to reconcile existing fairness frameworks with the development and proliferation of large generative models.

Topic: Fairness has been predominantly studied under the static regime, assuming an unchanging data generation process [Hardt et al., 2016a, Dwork et al., 2012, Agarwal et al., 2018, Zafar et al., 2017]. However, these approaches neglect the dynamic interplay between algorithmic decisions and the individuals they impact, which has been shown to be prevalent in practical settings [Chaney et al., 2018, Fuster et al., 2022]. This observation has highlighted the need to study the long-term effects of fairness mitigation strategies and to incorporate dynamic systems within the development of fair algorithms. Despite prior research identifying several impactful scenarios where such dynamics can occur, including bureaucratic processes [Liu et al., 2018], social learning [Heidari et al., 2019], recourse [Karimi et al., 2020], and strategic behavior [Hardt et al., 2016b, Perdomo et al., 2020], extensive investigation of the long-term effects of fairness methods remains limited. Initial studies have shown how enforcing static fairness constraints in dynamical systems can lead to unfair data distributions and may perpetuate or even amplify biases [Zhang et al., 2020, Creager et al., 2020, D'Amour et al., 2020].

Additionally, the rise of powerful large generative models has brought to the forefront the need to understand fairness in evolving systems. The general capabilities and widespread use of these models raise the critical question of how to assess these models for fairness [Luccioni et al., 2023] and mitigate observed biases [Ranaldi et al., 2023, Ma et al., 2023] from a long-term perspective. Importantly, mainstream fairness frameworks have been developed around classification and prediction tasks. How can we reconcile these existing techniques (pre-processing, in-processing, and post-processing) with the development of large generative models?

Given these interesting questions, this workshop aims to deeply investigate how to address fairness concerns in settings where learning occurs sequentially or in evolving environments. We are particularly interested in addressing open questions in the field, such as:

• What are the long-term effects of static fairness methods?
• How to develop adaptable fairness approaches under known or unknown dynamic environments?
• Are there trade-offs between short-term and long-term fairness?
• How to incorporate existing fairness frameworks into the development of large generative models?
• How to ensure long-term fairness in large generative models via feedback loops?
Workshop

New Frontiers in Graph Learning (GLFrontiers)

Jiaxuan You · Rex Ying · Hanjun Dai · Ge Liu · Azalia Mirhoseini · Smita Krishnaswamy · Chaoran Cheng
Dec 15, 7:00 AM - 3:00 PM Hall C2 (level 1 gate 9 south of food court)

Overview: Graph learning has grown into an established sub-field of machine learning in recent years. Researchers have been focusing on developing novel model architectures, theoretical understandings, scalable algorithms and systems, and successful applications across industry and science. With the success of the New Frontiers in Graph Learning (GLFrontiers) Workshop at NeurIPS 2022, we hope to continue promoting the exchange of discussions and ideas regarding the future of graph learning at NeurIPS 2023.

Challenges: Despite the success of graph learning in various applications, recent machine learning research trends, especially the push towards foundation models and large language models, have posed challenges for the graph learning field. For example, regarding model architecture, Transformer-based models have been shown to be superior to graph neural networks on certain small graph learning benchmarks. In terms of usability, with language as a generic user interface, it is still a research frontier to explore whether natural language can also interact with ubiquitous graph-structured data and whether it is feasible to build generic foundation models for graphs. Lastly, while graph learning has achieved exciting recent results in molecule and protein design, how graph learning can accelerate scientific discoveries in other disciplines remains an open question.

Goal: The primary goal of this workshop is to expand the impact of graph learning beyond its current boundaries. We believe that graphs, or relational data, are a universal language for describing the complex world. Ultimately, we hope graph learning will become a generic tool for learning and understanding any type of (structured) data. In GLFrontiers 2023, we specifically aim to discuss the future of graph learning in the era of foundation models and envision how graph learning can contribute to scientific discoveries.

Workshop

6th Workshop on Artificial Intelligence for Humanitarian Assistance and Disaster Response

Ritwik Gupta · Thomas Manzini · Robin Murphy · Eric Heim · Bertrand Saux · Katie Picchione
Dec 15, 7:00 AM - 2:30 PM Room 240 - 241

Natural disasters are one of the oldest threats to individuals and the societies they live in. As a result, humanity has ceaselessly sought ways to assist people in need after disasters strike. Furthermore, natural disasters are but a single, extreme example of the many possible humanitarian crises: disease outbreaks, famine, and oppression of disadvantaged groups can pose even greater dangers, with less obvious solutions. In this workshop, we seek to bring together the Artificial Intelligence (AI) and Humanitarian Assistance and Disaster Response (HADR) communities in order to bring AI to bear on real-world humanitarian crises. Through this workshop, we intend to establish meaningful dialogue between the communities. By the end of the workshop, the NeurIPS research community should better understand the practical challenges of aiding those experiencing crises, while the HADR community should better understand the state of the art and practice in AI. Through this, we seek to begin establishing a pipeline for transitioning research created by the NeurIPS community to real-world humanitarian issues.

Workshop

Workshop on Distribution Shifts: New Frontiers with Foundation Models

Rebecca Roelofs · Fanny Yang · Hongseok Namkoong · Masashi Sugiyama · Jacob Eisenstein · Pang Wei Koh · Shiori Sagawa · Tatsunori Hashimoto · Yoonho Lee
Dec 15, 7:00 AM - 3:00 PM Room R06-R09 (level 2)

Tagline: This workshop focuses on distribution shifts in the context of foundation models.

Distribution shifts, where a model is deployed on a data distribution different from the one it was trained on, pose significant robustness challenges in real-world ML applications. Such shifts are often unavoidable in the wild and have been shown to substantially degrade model performance in a wide range of applications. For example, models can systematically fail when tested on patients from different hospitals or people from different demographics. Training models that are robust to such distribution shifts is a rapidly growing area of interest in the ML community, and the goal of our workshop is to foster discussions and further research on distribution shifts. This year, our workshop focuses on foundation models: large pretrained models that can be adapted for a wide range of tasks. Foundation models open up an exciting new frontier in the study of distribution shifts, raising open research questions such as how pre-training improves robustness, how to fine-tune foundation models for increased robustness, how to leverage foundation models' generative capabilities for robustness, and how to handle discrepancies between standard pre-training distributions and downstream distributions of interest. We aim to facilitate discussions around these topics by bringing together researchers working on distribution shifts and foundation models.

Workshop

Heavy Tails in ML: Structure, Stability, Dynamics

Mert Gurbuzbalaban · Stefanie Jegelka · Michael Mahoney · Umut Simsekli
Dec 15, 7:00 AM - 3:30 PM Room R02-R05 (level 2)

Heavy tails and chaotic behavior arise naturally in many ways in ML. We aim to understand how they emerge and how they affect the properties of ML methods.

Workshop

Backdoors in Deep Learning: The Good, the Bad, and the Ugly

Khoa D Doan · Aniruddha Saha · Anh Tran · Yingjie Lao · Kok-Seng Wong · Ang Li · HARIPRIYA HARIKUMAR · Eugene Bagdasarian · Micah Goldblum · Tom Goldstein
Dec 15, 7:00 AM - 3:00 PM Room 203 - 205

Deep neural networks (DNNs) are revolutionizing almost all AI domains and have become the core of many modern AI systems. While outperforming classical methods, DNNs also face new security problems, such as adversarial and backdoor attacks, that are hard to discover and resolve due to their black-box nature. Backdoor attacks, in particular, are a brand-new threat that was only discovered in 2017 but has quickly gained attention in the research community. The number of backdoor-related papers grew from 21 to around 110 in only one year (2019-2020); in 2022 alone, there were more than 200 papers on backdoor learning, showing high research interest in this domain.

Backdoor attacks are possible because of insecure model pretraining and outsourcing practices. Due to the complexity and tremendous cost of collecting data and training models, many individuals and companies simply employ models or training data from third parties. Malicious third parties can add backdoors to their models or poison their released data before delivering it to victims in order to gain illegal benefits. This threat seriously damages the safety and trustworthiness of AI development. Lately, many studies on backdoor attacks and defenses have been conducted to prevent this critical vulnerability.

While most works consider backdoors "evil", some studies exploit them for good purposes. A popular approach is to use a backdoor as a watermark to detect illegal use of commercialized data or models. A few works employ the backdoor as a trapdoor for adversarial defense. Studying the working mechanism of backdoors also deepens our understanding of how deep learning models work.

This workshop is designed to provide a comprehensive understanding of the current state of backdoor research. We also want to raise the AI community's awareness of this important security problem and motivate researchers to build safe and trustworthy AI systems.

Workshop

Goal-Conditioned Reinforcement Learning

Benjamin Eysenbach · Ishan Durugkar · Jason Ma · Andi Peng · Tongzhou Wang · Amy Zhang
Dec 15, 7:00 AM - 3:00 PM Room 206 - 207

Learning goal-directed behavior is one of the classical problems in AI, one that has received renewed interest in recent years and currently sits at the crossroads of many seemingly disparate research threads: self-supervised learning, representation learning, probabilistic inference, metric learning, and duality.

Our workshop focuses on these goal-conditioned RL (GCRL) algorithms and their connections to different areas of machine learning. Goal-conditioned RL is exciting not just because of these theoretical connections with different fields, but also because it promises to lift some of the practical challenges of applying RL algorithms: users can specify desired outcomes with a single observation rather than a mathematical reward function. As such, GCRL algorithms may be applied to problems ranging from robotics to language model tuning to molecular design to instruction following.

Our workshop aims to bring together researchers studying the theory, methods, and applications of GCRL, researchers who might be well poised to answer questions such as:

1. How does goal-directed behavior in animals inform better GCRL algorithmic design?
2. How can GCRL enable more precise and customizable molecular generation?
3. Do GCRL algorithms provide an effective mechanism for causal reasoning?
4. When and how should GCRL algorithms be applied to precision medicine?

Workshop

MATH-AI: The 3rd Workshop on Mathematical Reasoning and AI

Zhenwen Liang · Albert Q. Jiang · Katie Collins · Pan Lu · Kaiyu Yang · Sean Welleck · James McClelland
Dec 15, 7:00 AM - 3:00 PM Room 217 - 219

Mathematical reasoning is a fundamental aspect of human cognition that has been studied by scholars ranging from philosophers to cognitive scientists and neuroscientists. Mathematical reasoning involves analyzing complex information, identifying patterns and relationships, and drawing logical conclusions from evidence. It is central to many applications in science, engineering, finance, and everyday contexts. Recent advancements in large language models (LLMs) have unlocked new opportunities at the intersection of artificial intelligence and mathematical reasoning, ranging from new methods that solve complex problems or prove theorems, to new forms of human-machine collaboration in mathematics and beyond. Our proposed workshop is centered on the intersection of deep learning and mathematical reasoning, with an emphasis on, but not limited to, large language models. Our guiding theme is: "To what extent can machine learning models comprehend mathematics, and what applications could arise from this capability?" To address this question, we aim to bring together a diverse group of scholars from different backgrounds, institutions, and disciplines in our workshop. By hosting this workshop, we hope to stimulate insightful discussions that will guide future research and applications in this rapidly expanding field.

Workshop

OPT 2023: Optimization for Machine Learning

Cristóbal Guzmán · Courtney Paquette · Katya Scheinberg · Aaron Sidford · Sebastian Stich
Dec 15, 7:00 AM - 3:01 PM Hall D2 (level 1)

Optimization lies at the heart of many machine learning algorithms and enjoys great interest in our community. Indeed, this intimate relation of optimization with ML is the key motivation for the OPT series of workshops. We aim to foster discussion, discovery, and dissemination of state-of-the-art research in optimization relevant to ML.

To foster the spirit of innovation and collaboration that is a goal of this workshop, OPT 2023 will focus the contributed talks on research in "Optimization in the Wild"; this title is meant to encompass the new challenges that traditional optimization theory and algorithms face with the growth and variety of novel ML applications.

Successful applications of both theory and algorithms from optimization to ML frequently require a profound redesign or even entirely new approaches. This becomes apparent in settings where the classical (empirical) risk minimization approach is no longer sufficient to address the challenges of learning. As motivating examples, we consider learning under (group or individual) fairness in distributed scenarios, learning under differential privacy, robustness, multi-task and transfer learning, as well as sampling from log-concave distributions. On the other hand, novel neural network architectures (such as Transformers) require exploiting their structure in crucial ways for efficient optimization. For these models and problems: What is the role of optimization? What synergies can be exploited with the insights coming from these particular areas towards more efficient and reliable solutions? We will foster discussions directed at developing an understanding of these challenges and at raising awareness of the capabilities and risks of using optimization in each of these areas.

Workshop

Gaze Meets ML

Amarachi Blessing Mbakwe · Joy T Wu · Dario Zanca · Elizabeth Krupinski · Satyananda Kashyap · Alexandros Karargyris
Dec 16, 6:15 AM - 3:00 PM Room 240 - 241

Eye gaze has proven to be a cost-efficient way to collect large-scale physiological data that can reveal the underlying human attentional patterns in real-life workflows, and thus has long been explored as a signal to directly measure human-related cognition in various domains. Physiological data (including but not limited to eye gaze) offer new perception capabilities that could be used in several ML domains, e.g., egocentric perception, embodied AI, and NLP. They can help infer human perception, intentions, beliefs, goals, and other cognitive properties that are much needed for human-AI interactions and agent coordination. In addition, large collections of eye-tracking data have enabled data-driven modeling of human visual attention mechanisms, for both saliency and scan-path prediction, with twofold advantages: from the neuroscientific perspective, to better understand biological mechanisms; and from the AI perspective, to equip agents with the ability to mimic or predict human behavior and improve interpretability and interactions.

The Gaze Meets ML workshop aims to bring together an active research community to collectively drive progress in defining and addressing core problems in gaze-assisted machine learning. This year the workshop will hold its second edition at NeurIPS, attracting a diverse group of researchers from academia and industry presenting novel work in this area of research.

Workshop

Adaptive Experimental Design and Active Learning in the Real World

Willie Neiswanger · Mojmir Mutny · Ilija Bogunovic · Ava Amini · Zi Wang · Stefano Ermon · Andreas Krause
Dec 16, 6:15 AM - 3:30 PM Room 208 - 210

Join us for an insightful workshop on adaptive experimental design and active learning. Dive into their use in fields like computational biology, materials discovery, chip design, and more.

Workshop

Temporal Graph Learning Workshop @ NeurIPS 2023

Shenyang Huang · Farimah Poursafaei · Kellin Pelrine · Julia Gastinger · Emanuele Rossi · Michael Bronstein · Reihaneh Rabbany
Dec 16, 6:15 AM - 3:30 PM Room 203 - 205

Temporal graph learning is an emerging area of research in graph representation learning, motivated by the prevalence of evolving and dynamic interconnected data in different domains and applications. In this workshop, which will be the second workshop on temporal graph learning, we plan to bring together researchers working on relevant areas to exchange ideas on different aspects of temporal graph learning including datasets for discrete and continuous time graphs, evaluation strategies, theoretical foundations, as well as using temporal graph learning paradigms in real-world applications.

Workshop

AI for Science: from Theory to Practice

Yuanqi Du · Max Welling · Yoshua Bengio · Marinka Zitnik · Carla Gomes · Jure Leskovec · Maria Brbic · Wenhao Gao · Kexin Huang · Ziming Liu · Rocío Mercado · Miles Cranmer · Shengchao Liu · Lijing Wang
Dec 16, 6:15 AM - 3:30 PM Hall C2 (level 1 gate 9 south of food court)

AI is being increasingly integrated into scientific discovery to augment and accelerate research, helping scientists to generate hypotheses, design experiments, collect and interpret large datasets, and gain new insights that might not have been possible using traditional scientific methods alone. It has solved scientific challenges that were unimaginable before, e.g., predicting 3D protein structures, simulating molecular systems, forecasting global climate, and discovering new scientific laws. Despite this promise, several critical gaps stifle algorithmic and scientific innovation in "AI for Science," and the overarching goal of this workshop is to grow AI for Science by closing these gaps:

* Gap 1: Science of science. The principles of scientific methods have remained unchanged since the 17th century. How AI can facilitate the practice of scientific discovery itself often remains undiscussed. For example, instead of the numerous hypothesis-experiment cycles needed to make sense of a scientific phenomenon, can AI reason and output natural laws directly?
* Gap 2: Limited exploration at the intersections of multiple disciplines. Solutions to grand challenges stretch across various disciplines. For example, protein structure prediction requires collaboration across physics, chemistry, and biology, and single-cell imaging of whole tumors can be approached with cosmology algorithms that connect cells as stars.
* Gap 3: Unified ecosystems of datasets, models, and scientific hypotheses. Comprehensive ecosystems and engagement of the research community, e.g., accumulation of datasets, open-source platforms, and benchmarks, are needed to reliably evaluate AI tools and integrate them into scientific workflows and instruments so that they can contribute to scientific understanding or acquire it autonomously. The workshop will emphasize this indispensable ingredient of the success of AI for Science and engage in discussions around it.
* Gap 4: Responsible use and development of AI for science. Interest in AI across scientific disciplines has grown, but very few AI models have progressed to routine use in practice. We plan to present a roadmap and guidelines for accelerating the translation of AI in science. To be successful, translation will require a team of engaged stakeholders and a systematic process from beginning (problem formulation) to end (widespread deployment).
* Gap 5: Lack of educational resources. A critical element in increasing the adoption of AI for scientific discovery across disciplines is creating accessible educational materials and AI-lab protocols for both AI researchers and scientists of different areas of expertise, seniority, and levels of interest.
* Gap 6: Unrealistic methodological assumptions or directions. While AI researchers strive for methodological advances, they can make unrealistic assumptions that limit the applicability of new algorithms, their adoption in real-world settings, and their transition into implementation (e.g., at a particle accelerator, genome sequencing lab, or quantum chemistry lab). For example, while state-of-the-art molecule generation AI models perform well on benchmarks, they often generate molecules that can't be synthesized in a lab.

Workshop

Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization

Julia Gusak · Jean Kossaifi · Alena Shilova · Rocco Sedona · Cristiana Bentes · Animashree Anandkumar · Olivier Beaumont
Dec 16, 6:15 AM - 3:30 PM Room 243 - 245

Unlock the potential of neural network training for good and for science! Enhance computational efficiency, scalability, and resource optimization. Join HPC and AI experts to tackle challenges in theory and applications.

Workshop

Generative AI and Biology (GenBio@NeurIPS2023)

Minkai Xu · Regina Barzilay · Jure Leskovec · Wenxian Shi · Menghua Wu · Zhenqiao Song · Lei Li · Fan Yang · Stefano Ermon
Dec 16, 6:15 AM - 3:30 PM Room 265 - 268

Advancing biological discovery, therapeutic design, and pharma development through generative AI.

Workshop

6th Robot Learning Workshop: Pretraining, Fine-Tuning, and Generalization with Large Scale Models

Dhruv Shah · Paula Wulkop · Claas Voelcker · Georgia Chalvatzaki · Alex Bewley · Hamidreza Kasaei · Ransalu Senanayake · Julien PEREZ · Jonathan Tompson
Dec 16, 6:15 AM - 3:30 PM Hall B2 (level 1)

The proposed workshop focuses on the intersection of machine learning (ML) and robotics, under this year’s focus topic: “Pretraining, Fine-Tuning, and Generalization with Large Scale Models.” Embodied AI and robotics pose unique challenges and opportunities for utilizing large pre-trained models. We seek to host a diverse set of views and approaches from across the robotics domain and dive deep into questions such as: What sources of data can be used for training large models in robotics? What role should pre-training play in robotics pipelines? How far can pre-trained models generalize when faced with novel tasks and environments? What is currently missing from the pre-training paradigm for embodied systems?

Workshop

Third Workshop on Efficient Natural Language and Speech Processing (ENLSP-III): Towards the Future of Large Language Models and their Emerging Descendants

Mehdi Rezagholizadeh · Peyman Passban · Yue Dong · Yu Cheng · Soheila Samiee · Lili Mou · Qun Liu · Boxing Chen
Dec 16, 6:15 AM - 3:15 PM Room 206 - 207

The third edition of the Efficient Natural Language and Speech Processing (ENLSP-III) workshop will focus on the future of large language and speech foundation models, and on how to make them more efficient in terms of Data, Model, Training, and Inference for real-world applications as well as academic research. The workshop program offers an interactive platform for gathering experts and talents from academia and industry through invited talks, a panel discussion, paper submissions, reviews, interactive posters, oral presentations, and a mentorship program. This will be a unique opportunity to discuss and share challenging problems, build connections, exchange ideas, brainstorm solutions, and foster future collaborations. The topics of this workshop will be of interest to people working on general machine learning, deep learning, optimization, theory, and NLP & speech applications.

Workshop

Intrinsically Motivated Open-ended Learning (IMOL) Workshop

Cédric Colas · Laetitia Teodorescu · Nadia Ady · Cansu Sancaktar · Junyi Chu
Dec 16, 6:15 AM - 3:30 PM Room 260 - 262

How do humans develop broad and flexible repertoires of knowledge and skills? How can we design autonomous lifelong learning machines with the same abilities? The field of IMOL explores these questions through integrating research on the motivational forces, learning architectures, and developmental and environmental constraints supporting the acquisition of open-ended repertoires of skill and knowledge.

At this full-day in-person NeurIPS workshop, we will gather speakers from a wide diversity of scientific traditions, showcase on-going research via contributed talks and poster sessions, and provide networking opportunities for research and mentorship discussions.

Workshop

Generalization in Planning (GenPlan '23)

Pulkit Verma · Siddharth Srivastava · Aviv Tamar · Felipe Trevizan
Dec 16, 6:15 AM - 3:30 PM Room 238 - 239

This workshop aims to bridge highly active but largely parallel research communities, addressing the problem of generalizable and transferrable learning for all forms of sequential decision making (SDM), including reinforcement learning and AI planning. We expect that this workshop will play a key role in accelerating the speed of foundational innovation in SDM with a synthesis of the best ideas for learning generalizable representations of learned knowledge and for reliably utilizing the learned knowledge across different sequential decision-making problems. NeurIPS presents an ideal, inclusive venue for dialog and technical interaction among researchers spanning the vast range of research communities that focus on these topics.

Workshop

NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning: Blending New and Existing Knowledge Systems

Rasika Bhalerao · Mark Roth · Kai Jeggle · Jorge Montalvo Arvizu · Shiva Madadkhani · Yoshua Bengio
Dec 16, 6:15 AM - 3:30 PM Great Hall (level 1)

Climate change is a complex, multifaceted, and far-reaching challenge with increasingly severe consequences for humanity as natural disasters multiply, sea levels rise, and ecosystems falter. Actions to address climate change take many forms, from designing smart electric grids to tracking greenhouse gas emissions through satellite imagery. Machine learning is emerging as a necessary tool for mitigating and adapting to climate change via a wide array of techniques. Using machine learning to address climate change, a subset of the "AI for society" research area, requires close interdisciplinary collaboration among various fields with diverse practitioners. This workshop is intended to form connections and foster cross-pollination between researchers in machine learning and experts in complementary climate-relevant fields, in addition to providing a forum for those in the machine learning community who wish to tackle climate change.

Workshop

NeurIPS 2023 Workshop on Machine Learning for Creativity and Design

Yingtao Tian · Tom White · Lia Coleman · Hannah Johnston
Dec 16, 6:15 AM - 2:40 PM Room 252 - 254

Machine co-creativity grows continually with machine learning, especially with the recent surge of generative models across multiple domains. This workshop, the latest in a long series, explores these topics, including state-of-the-art algorithms for creation, the accessibility of these models for artists, social and cultural impact, and actual artistic applications. The workshop consists of presentations by invited speakers, presentations of selected papers and artworks, two panels, and an art showcase (in collaboration with the chairs of the NeurIPS Creative AI track). The goal of this workshop is to bring together researchers and artists interested in exploring the intersection of human creativity and machine learning, and to look beyond technical issues to better understand the needs of artists and creators.

Workshop

Machine Learning for Audio

Brian Kulis · Sadie Allen · Sander Dieleman · Shrikanth Narayanan · Rachel Manzelli · Alice Baird · Alan Cowen
Dec 16, 6:20 AM - 3:30 PM Room 228 - 230

The Machine Learning for Audio Workshop at NeurIPS 2023 will bring together audio practitioners and machine learning researchers in a venue focused on various problems in audio, including music information retrieval, acoustic event detection, computational paralinguistics, speech transcription, multimodal modeling, and generative modeling of speech and other sounds. Our team has previously held multiple audio-related workshops at top machine learning venues, and both the organizing team and invited speakers represent broad diversity in terms of gender identity, affiliation, seniority, and geography. We also plan to solicit workshop papers on the topic.

Workshop

Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS'23)

Jinghui Chen · Lixin Fan · Gauri Joshi · Sai Praneeth Karimireddy · Stacy Patterson · Shiqiang Wang · Han Yu
Dec 16, 6:25 AM - 3:30 PM Hall D2 (level 1)

An exciting forum for researchers to exchange the recent developments in federated learning in the modern age of foundation models.

Please visit our workshop webpage for full details: https://federated-learning.org/fl@fm-neurips-2023/

Workshop

Socially Responsible Language Modelling Research (SoLaR)

Usman Anwar · David Krueger · Samuel Bowman · Jakob Foerster · Su Lin Blodgett · Roberta Raileanu · Alan Chan · Laura Ruis · Robert Kirk · Yawen Duan · Xin Chen · Kawin Ethayarajh
Dec 16, 6:30 AM - 3:30 PM Room R06-R09 (level 2)

The inaugural Socially Responsible Language Modelling Research (SoLaR) workshop at NeurIPS 2023 is an interdisciplinary gathering that aims to foster responsible and ethical research in the field of language modeling. Recognizing the significant risks and harms [33-37] associated with the development, deployment, and use of language models, the workshop emphasizes the need for researchers to focus on addressing these risks starting from the early stages of development. The workshop brings together experts and practitioners from various domains and academic fields with a shared commitment to promoting fairness, equity, accountability, transparency, and safety in language modeling research. In addition to technical works on socially responsible language modeling research, we also encourage sociotechnical submissions from other disciplines such as philosophy, law, and policy, in order to foster an interdisciplinary dialogue on the societal impacts of LMs.

Workshop

The Symbiosis of Deep Learning and Differential Equations -- III

Luca Herranz-Celotti · Martin Magill · Ermal Rrapaj · Winnie Xu · Qiyao Wei · Archis Joglekar · Michael Poli · Animashree Anandkumar
Dec 16, 6:30 AM - 2:45 PM Room 255 - 257

In the deep learning community, a remarkable trend is emerging in which powerful architectures are created by leveraging classical mathematical modeling tools from diverse fields such as differential equations, signal processing, and dynamical systems. Differential equations are a prime example: research on neural differential equations has expanded into a large zoo of related models, with applications ranging from time series analysis to robotics control. Score-based diffusion models, which are among the state-of-the-art tools for generative modelling, draw direct connections between diffusion models and neural differential equations. Other examples of deep architectures with important ties to classical fields of mathematical modelling include normalizing flows, graph neural diffusion models, Fourier neural operators, architectures exhibiting domain-specific equivariances, and latent dynamical models (e.g., latent NDEs, H3, S4, Hyena).

The previous two editions of the Workshop on the Symbiosis of Deep Learning and Differential Equations have promoted the bidirectional exchange of ideas at the intersection of classical mathematical modelling and modern deep learning. On the one hand, this includes the use of differential equations and similar tools to create neural architectures, accelerate deep learning optimization problems, or study theoretical problems in deep learning. On the other hand, the workshop also explores the use of deep learning methods to improve the speed, flexibility, or realism of computer simulations. Last year, we noted a particularly keen interest from the audience in neural architectures that leverage classical mathematical models, such as those listed above. We therefore propose that the third edition of this workshop focus on this theme.

Workshop

Optimal Transport and Machine Learning

Anna Korba · Aram-Alexandre Pooladian · Charlotte Bunne · David Alvarez-Melis · Marco Cuturi · Ziv Goldfeld
Dec 16, 6:30 AM - 3:30 PM Room 220 - 222

Over the last decade, optimal transport (OT) has evolved from a prize-winning research area in pure mathematics to a recurring theme bursting across many areas of machine learning (ML). Advancements in OT theory, computation, and statistics have fueled breakthroughs in a wide range of applications, from single-cell genomics to generative modeling and optimization of over-parametrized neural nets, among many others. The OTML workshop series (in '14, '17, '19, and '21) has been instrumental in shaping this influential research thread. For this new OTML installment, we aim even higher by hosting two exceptional plenary speakers: Luis Caffarelli, who received the 2023 Abel Prize for his seminal contributions to regularity theory for the Monge–Ampère equation and OT, and Felix Otto, the 2006 Leibniz Prize awardee and 2017 Blaise Pascal medalist, who made profound contributions to the theory of Wasserstein gradient flows. The OTML workshop will provide a unique platform to federate, disseminate, and advance current knowledge in this rapidly growing field. This, in turn, will facilitate cross-field fertilization and drive the community towards future groundbreaking discoveries.

Workshop

I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models

Estefany Kelly Buchanan · Fan Feng · Andreas Kriegler · Ian Mason · Tobias Uelwer · Yubin Xie · Rui Yang
Dec 16, 6:45 AM - 3:30 PM Room R02-R05 (level 2)

In the past year, tools such as ChatGPT, Stable Diffusion and SegmentAnything have had an immediate impact on our everyday lives. Many of these tools have been built using foundation models, that is, very large models (having billions or trillions of parameters) trained on vast amounts of data (Bommasani et al., 2021). The excitement around these foundation models and their capabilities might suggest that all the interesting problems have been solved and artificial general intelligence is just around the corner (Wei et al., 2022; Bubeck et al., 2023).

At this year’s I Can’t Believe It’s Not Better workshop we invite papers to coolly reflect on this optimism and to demonstrate that there are in fact many difficult and interesting open questions. The workshop will specifically focus on failure modes of foundation models, especially unexpected negative results. In addition, we invite contributions that will help us understand current and future disruptions of machine learning subfields, as well as instances where these powerful methods merely remain complementary to another subfield of machine learning.

Contributions on the failure modes of foundation models might consider:
- Domain-specific areas where the application of foundation models did not work as expected.
- Failures in the safety and explainability of foundation models.
- The limits of current foundation model methodologies.

Besides failure modes of foundation models, this workshop also considers their impact on the ML ecosystem and potential problems that remain to be solved by these new systems. In this context, relevant questions include:
- Where do foundation models leave researchers in other areas (e.g., AI for science, recommender systems, Bayesian methods, bioinformatics)?
- Which important problems are not solved by training large models with large amounts of data?
- What unexpected negative results were encountered when applying foundation models to a specific domain?

Workshop

XAI in Action: Past, Present, and Future Applications

Chhavi Yadav · Michal Moshkovitz · Nave Frost · Suraj Srinivas · Bingqing Chen · Valentyn Boreiko · Himabindu Lakkaraju · Zico Kolter · Dotan Di Castro · Kamalika Chaudhuri
Dec 16, 6:50 AM - 3:30 PM Room 271 - 273

Transparency is vital for AI’s growth. This has led to the design of new methods in explainable AI (XAI). We aim to explore the current state of applied XAI and identify future directions.

Workshop

Mathematics of Modern Machine Learning (M3L)

Zhiyuan Li · Tengyu Ma · Surbhi Goel · Kaifeng Lyu · Christina Baek · Bingbin Liu · Alex Damian · Aditi Raghunathan
Dec 16, 6:50 AM - 3:00 PM Room 242

This workshop explores theory for understanding and advancing modern ML practices: optimization, generalization, and foundation models.

Workshop

Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

Jiaqi Ma · Chirag Agarwal · Sarah Tan · Himabindu Lakkaraju · Usha Bhalla · Zana Bucinca · Zixi Chen · Junwei Deng · Xudong Shen · Varshini Subhash
Dec 16, 6:55 AM - 3:30 PM Room 215 - 216

This workshop brings together ML and policy experts to identify and address various technical and policy challenges that arise when regulating ML models.

Workshop

Medical Imaging meets NeurIPS

Daniel Moyer · DOU QI · Yuankai Huo · Konstantinos Kamnitsas · Andrea Lara · Xiaoxiao Li · Islem Rekik
Dec 16, 7:00 AM - 3:00 PM Hall B1 (level 1)

“Medical Imaging meets NeurIPS” aims to bring researchers together from the medical imaging and machine learning communities to create a cutting-edge venue for discussing the major challenges in the field and opportunities for research and novel applications. The proposed event will be the continuation of a successful workshop organized for the past 6 years. It will feature a series of invited speakers (all confirmed) from academia, medical sciences, and industry to present their latest work, as well as reviews of recent technological advances and remaining major challenges. This year we aim to have all keynotes presented in person (to facilitate speaker interaction and discourse), an extended number of submitted talks (approximately double that of previous years), and an updated call that highlights changes occurring in our interdisciplinary field.

Workshop

Multi-Agent Security: Security as Key to AI Safety

Christian Schroeder de Witt · Hawra Milani · Klaudia Krawiecka · Swapneel Mehta · Carla Cremer · Martin Strohmeier
Dec 16, 7:00 AM - 3:30 PM Room 223

This workshop proposal builds on the observation that the AI and cyber security communities are currently not sufficiently interconnected to navigate risks and opportunities in our multi-agent world. Through a series of discussions involving experts and audiences, provocation and intervention keynotes, and contributed content, we aim to compare, contrast, and synthesize near- and long-term perspectives of AI deployment across society. The fundamental goal of this workshop is to bring together researchers, practitioners, and activists across AI and cyber security in order to create a blueprint for the future of AI security in a multi-agent world, and to define, explore, and challenge the nascent field of multi-agent security (MASEC).

Submission deadline: September 25, 2023
Acceptance Notification: October 27, 2023
Workshop date: December 16, 2023

Workshop

Machine Learning with New Compute Paradigms

Jannes Gladrow · Benjamin Scellier · Eric Xing · Babak Rahmani · Francesca Parmigiani · Paul Prucnal · Cheng Zhang
Dec 16, 7:00 AM - 3:00 PM Room 235 - 236

As GPU computing comes closer to a plateau in terms of efficiency and cost due to Moore’s law reaching its limit, there is a growing need to explore alternative computing paradigms, such as (opto-)analog, neuromorphic, and low-power computing. This NeurIPS workshop aims to unite researchers from machine learning and alternative computation fields to establish a new hardware-ML feedback loop. By co-designing models with specialized accelerators, we can leverage the benefits of increased throughput or lower per-flop power consumption. Novel devices hold the potential to further accelerate standard deep learning or even enable efficient inference and training of hitherto compute-constrained model classes. However, new compute paradigms typically present challenges such as intrinsic noise, restricted sets of compute operations, or limited bit-depth, and thus require model-hardware co-design. This workshop’s goal is to foster cross-disciplinary collaboration to capitalize on the opportunities offered by emerging AI accelerators.

Workshop

4th Workshop on Self-Supervised Learning: Theory and Practice

Tengda Han · Ishan Misra · Pengtao Xie · Mathilde Caron · Hilde Kuehne · Xingjian Bai · Vadim Tschernezki
Dec 16, 7:00 AM - 3:00 PM Room 217 - 219

The 4th Workshop on "Self-Supervised Learning: Theory and Practice" aims to discuss the theory and practice of self-supervised learning across multiple research areas such as vision, NLP, and robotics.

Workshop

Synthetic Data Generation with Generative AI

Sergul Aydore · Zhaozhi Qian · Mihaela van der Schaar
Dec 16, 7:00 AM - 3:00 PM Hall E2 (level 1)

Synthetic data (SD) is data that has been generated by a mathematical model to solve downstream data science tasks. SD can be used to address three key problems: (1) private data release, (2) data de-biasing and fairness, and (3) data augmentation for boosting the performance of ML models. While SD offers great opportunities for these problems, SD generation is still a developing area of research. Systematic frameworks for SD deployment and evaluation are also still missing. Additionally, despite the substantial advances in Generative AI, the scientific community still lacks a unified understanding of how generative AI can be utilized to generate SD for different modalities. The goal of this workshop is to provide a platform for vigorous discussion from all these different perspectives with research communities in the hope of progressing the ideal of using SD for better and trustworthy ML training. Through submissions and facilitated discussions, we aim to characterize and mitigate the common challenges of SD generation that span numerous application domains. The workshop is jointly organized by academic researchers (University of Cambridge) and industry partners from tech (Amazon AI).

Workshop

Symmetry and Geometry in Neural Representations

Sophia Sanborn · Christian A Shewmake · Simone Azeglio · Nina Miolane
Dec 16, 7:00 AM - 3:00 PM La Nouvelle Orleans Ballroom A+B (level 2)

In recent years, there has been a growing appreciation for the importance of respecting the topological, algebraic, or geometric structure of data in machine learning models. In parallel, an emerging set of findings in computational neuroscience suggests that the preservation of this kind of mathematical structure may be a fundamental principle of neural coding in biology. The goal of this workshop is to bring together researchers from applied mathematics and deep learning with neuroscientists whose work reveals the elegant implementation of mathematical structure in biological neural circuitry. Group theory and differential geometry were instrumental in unifying the models of 20th-century physics. Likewise, they have the potential to unify our understanding of how neural systems form useful representations of the world.

Workshop

Learning-Based Solutions for Inverse Problems

Shirin Jalali · Chris Metzler · Ajil Jalal · Jon Tamir · Reinhard Heckel · Paul Hand · Arian Maleki · Richard Baraniuk
Dec 16, 7:00 AM - 3:00 PM Room 214

Inverse problems are ubiquitous in science, medicine, and engineering, and research in this area has produced real-world impact in medical tomography, seismic imaging, computational photography, and other domains. The recent rapid progress in learning-based image generation raises exciting opportunities in inverse problems, and this workshop seeks to gather a diverse set of participants who apply machine learning to inverse problems, from mathematicians and computer scientists to physicists and biologists. This gathering will facilitate new collaborations and will help develop more effective, reliable, and trustworthy learning-based solutions to inverse problems.

Workshop

Machine Learning for Systems

Xinlei XU · Dan Zhang · Mangpo Phothilimthana · Beidi Chen · Yawen Wang · Divya Mahajan
Dec 16, 7:00 AM - 3:00 PM Room 211 - 213

Machine Learning (ML) for Systems describes the application of machine learning techniques to problems related to computer systems. By leveraging supervised learning and reinforcement learning (RL) approaches, machine learning can replace longstanding heuristics that currently drive many of these systems. This includes a wide range of topics, such as multi-objective tasks like designing new data structures, integrated circuits, or design verification, as well as implementing control algorithms for applications such as compilers, databases, memory management, or ML frameworks. While the systems community increasingly recognizes the importance of ML in solving a variety of different systems problems, ML for Systems remains an emerging area without widely established best practices, methods, and strategies for the application of state-of-the-art machine learning techniques. The goal of this workshop is to provide an interdisciplinary venue for ML and Systems experts to push this boundary and start new directions within the ML for Systems area.
