Workshop
Sat Dec 03 07:00 AM -- 03:00 PM (PST) @ Room 388 - 390
Workshop on Distribution Shifts: Connecting Methods and Applications
Chelsea Finn · Fanny Yang · Hongseok Namkoong · Masashi Sugiyama · Jacob Eisenstein · Jonas Peters · Rebecca Roelofs · Shiori Sagawa · Pang Wei Koh · Yoonho Lee






This workshop brings together domain experts and ML researchers working on mitigating distribution shifts in real-world applications.

Distribution shifts—where a model is deployed on a data distribution different from what it was trained on—pose significant robustness challenges in real-world ML applications. Such shifts are often unavoidable in the wild and have been shown to substantially degrade model performance in applications such as biomedicine, wildlife conservation, sustainable development, robotics, education, and criminal justice. For example, models can systematically fail when tested on patients from different hospitals or people from different demographics.

This workshop aims to convene a diverse set of domain experts and methods-oriented researchers working on distribution shifts. We are broadly interested in methods, evaluations and benchmarks, and theory for distribution shifts, and we are especially interested in work on distribution shifts that arise naturally in real-world application contexts. Examples of relevant topics include, but are not limited to:
- Examples of real-world distribution shifts in various application areas. We especially welcome applications that are not widely discussed in the ML research community, e.g., education, sustainable development, and conservation. We encourage submissions that characterize distribution shifts and their effects in real-world applications; it is not at all necessary to propose a solution that is algorithmically novel.
- Methods for improving robustness to distribution shifts. Relevant settings include domain generalization, domain adaptation, and subpopulation shifts, and we are interested in a wide range of approaches, from uncertainty estimation to causal inference to active data collection. We welcome methods that can work across a variety of shifts, as well as more domain-specific methods that incorporate prior knowledge on the types of shifts we wish to be robust to. We encourage evaluating these methods on real-world distribution shifts.
- Empirical and theoretical characterization of distribution shifts. Distribution shifts can vary widely in the way in which the data distribution changes, as well as the empirical trends they exhibit. What empirical trends do we observe? What empirical or theoretical frameworks can we use to characterize these different types of shifts and their effects? What kinds of theoretical settings capture useful components of real-world distribution shifts?
- Benchmarks and evaluations. We especially welcome contributions for subpopulation shifts, as they are underrepresented in current ML benchmarks. We are also interested in evaluation protocols that move beyond the standard assumption of fixed training and test splits: for which applications would we need to consider other forms of shifts, such as streams of continually-changing data or feedback loops between models and data?
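As a toy illustration of the covariate-shift setting discussed above (this sketch and all of its numbers are illustrative assumptions, not material from the workshop): a linear classifier fit on one input distribution can lose substantial accuracy when the test-time covariates shift, even though the labeling rule itself is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, mean_shift):
    """Two Gaussian classes separated along the first feature.
    `mean_shift` moves the covariate distribution at deployment time."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2))
    X[:, 0] += 2.0 * y + mean_shift  # class signal plus an optional covariate shift
    return X, y

# Fit a linear classifier by least squares (labels mapped to +/-1).
X_tr, y_tr = sample(2000, mean_shift=0.0)
w, *_ = np.linalg.lstsq(np.c_[X_tr, np.ones(len(X_tr))], 2 * y_tr - 1, rcond=None)

def accuracy(X, y):
    pred = (np.c_[X, np.ones(len(X))] @ w) > 0
    return (pred == y).mean()

X_id, y_id = sample(2000, mean_shift=0.0)    # in-distribution test set
X_ood, y_ood = sample(2000, mean_shift=1.5)  # covariates shifted at deployment

print(f"in-distribution accuracy:  {accuracy(X_id, y_id):.2f}")
print(f"shifted-distribution acc.: {accuracy(X_ood, y_ood):.2f}")
```

The decision boundary learned on the training distribution sits between the two class means, so translating the covariates at test time pushes one class across the boundary and accuracy drops well below its in-distribution level.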

Opening Remarks (Opening remarks for DistShift 2022)
Domain Adaptation: Theory, Algorithms, and Open Library (Invited Talk)
Machine-learning, distribution shifts and extrapolation in the Earth System (Invited Talk)
Coffee Break
The promises and pitfalls of CVaR (Invited Talk)
Panel Discussion (In-person Panel Discussion)
Lunch Break
Poster Session
First Steps Toward Understanding the Extrapolation of Nonlinear Models to Unseen Domains (Spotlight)
Learning Invariant Representations under General Interventions on the Response (Spotlight)
CAREER: Economic Prediction of Labor Sequence Data Under Distribution Shift (Spotlight)
Tackling Distribution Shifts in Federated Learning with Superquantile Aggregation (Spotlight)
Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization (Spotlight)
Data Feedback Loops: Model-driven Amplification of Dataset Biases (Spotlight)
Coffee Break
External Validity: Framework, Design, and Analysis (Invited Talk)
Bringing real-world data to bear in addressing distribution shifts: a sociolinguistically-informed analysis of ASR errors (Invited Talk)
Geospatial Distribution Shifts in Ecology: Mapping the Urban Forest (Invited Talk)
Closing Remarks (Closing remarks for DistShift 2022)
Domain Generalization for Robust Model-Based Offline Reinforcement Learning (Poster)
Multi-Domain Long-Tailed Learning by Augmenting Disentangled Representations (Poster)
Meta-Adaptive Stock Movement Prediction with Two-Stage Representation Learning (Poster)
Scale-conditioned Adaptation for Large Scale Combinatorial Optimization (Poster)
On the Abilities of Mathematical Extrapolation with Implicit Models (Poster)
Malign Overfitting: Interpolation and Invariance are Fundamentally at Odds (Poster)
Estimation of prediction error with known covariate shift (Poster)
A Synthetic Limit Order Book Dataset for Benchmarking Forecasting Algorithms under Distributional Shift (Poster)
A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias (Poster)
Task Modeling: Approximating Multitask Predictions for Cross-Task Transfer (Poster)
Generative Posterior Networks for Approximately Bayesian Epistemic Uncertainty Estimation (Poster)
Graph-Relational Distributionally Robust Optimization (Poster)
A Unified Framework for Comparing Learning Algorithms (Poster)
Domain Generalization with Nuclear Norm Regularization (Poster)
Invariant Feature Subspace Recovery for Multi-Class Classification (Poster)
Out-of-Distribution Robustness via Targeted Augmentations (Poster)
Pushing the Accuracy-Fairness Tradeoff Frontier with Introspective Self-play (Poster)
Reducing Forgetting in Federated Learning with Truncated Cross-Entropy (Poster)
Learning to Extrapolate: A Transductive Approach (Poster)
Surgical Fine-Tuning Improves Adaptation to Distribution Shifts (Poster)
Characterising the Robustness of Reinforcement Learning for Continuous Control using Disturbance Injection (Poster)
Class-wise Domain Generalization: A Novel Framework for Evaluating Distributional Shift (Poster)
Memory bounds for continual learning (Poster)
Tailored Overlap for Learning Under Distribution Shift (Poster)
Few-Shot Learnable Augmentation for Financial Time Series Prediction under Distribution Shifts (Poster)
Mechanistic Lens on Mode Connectivity (Poster)
Is Unsupervised Performance Estimation Impossible When Both Covariates and Labels shift? (Poster)
First Steps Toward Understanding the Extrapolation of Nonlinear Models to Unseen Domains (Poster)
DrML: Diagnosing and Rectifying Vision Models using Language (Poster)
Empirical Study on Optimizer Selection for Out-of-Distribution Generalization (Poster)
Choosing Public Datasets for Private Machine Learning via Gradient Subspace Distance (Poster)
Learning Invariant Representations under General Interventions on the Response (Poster)
Theory and Algorithm for Batch Distribution Drift Problems (Poster)
Enabling the Visualization of Distributional Shift using Shapley Values (Poster)
Frequency Shortcut Learning in Neural Networks (Poster)
Preserving privacy with PATE for heterogeneous data (Poster)
Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets (Poster)
Visual response inhibition for increased robustness of convolutional networks to distribution shifts (Poster)
AdaME: Adaptive learning of multisource adaptation ensembles (Poster)
Transferability Between Regression Tasks (Poster)
CAREER: Economic Prediction of Labor Sequence Data Under Distribution Shift (Poster)
Out-of-Distribution Generalization Challenge in Dialog State Tracking (Poster)
Diversity Boosted Learning for Domain Generalization with A Large Number of Domains (Poster)
Learning with noisy labels using low-dimensional model trajectory (Poster)
Evaluating the Impact of Geometric and Statistical Skews on Out-Of-Distribution Generalization Performance (Poster)
Strategy-Aware Contextual Bandits (Poster)
Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks (Poster)
Useful Confidence Measures: Beyond the Max Score (Poster)
Federated Learning under Distributed Concept Drift (Poster)
An Invariant Learning Characterization of Controlled Text Generation (Poster)
Tackling Distribution Shifts in Federated Learning with Superquantile Aggregation (Poster)
Few Shot Generative Domain Adaptation Via Inference-Stage Latent Learning in GANs (Poster)
Relational Out-of-Distribution Generalization (Poster)
Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization (Poster)
Test-time adaptation with slot-centric models (Poster)
Diversity through Disagreement for Better Transferability (Poster)
Env-Aware Anomaly Detection: Ignore Style Changes, Stay True to Content! (Poster)
Toward domain generalized pruning by scoring out-of-distribution importance (Poster)
Active Learning Over Multiple Domains in Natural Language Tasks (Poster)
Adaptive Sampling for Probabilistic Forecasting under Distribution Shift (Poster)
A Learning Based Hypothesis Test for Harmful Covariate Shift (Poster)
Engineering Uncertainty Representations to Monitor Distribution Shifts (Poster)
Data Feedback Loops: Model-driven Amplification of Dataset Biases (Poster)
"Why did the Model Fail?": Attributing Model Performance Changes to Distribution Shifts (Poster)
A Reproducible and Realistic Evaluation of Partial Domain Adaptation Methods (Poster)
Sparse Mixture-of-Experts are Domain Generalizable Learners (Poster)
Deep Class-Conditional Gaussians for Continual Learning (Poster)
A Closer Look at Novel Class Discovery from the Labeled Set (Poster)
Momentum-based Weight Interpolation of Strong Zero-Shot Models for Continual Learning (Poster)
Instance norm improves meta-learning in class-imbalanced land cover classification (Poster)
CUDA: Curriculum of Data Augmentation for Long-tailed Recognition (Poster)
Benchmarking Robustness under Distribution Shift of Multimodal Image-Text Models (Poster)
Sorted eigenvalue comparison d_Eig: A simple alternative to d_FID (Poster)
HICO-DET-SG and V-COCO-SG: New Data Splits to Evaluate Systematic Generalization in Human-Object Interaction Detection (Poster)
Undersampling is a Minimax Optimal Robustness Intervention in Nonparametric Classification (Poster)
Cross-Dataset Propensity Estimation for Debiasing Recommender Systems (Poster)
Multiple Modes for Continual Learning (Poster)
A new benchmark for group distribution shifts in hand grasp regression for object manipulation. Can meta-learning raise the bar? (Poster)
Explanation Shift: Detecting distribution shifts on tabular data via the explanation space (Poster)
Augmentation Consistency-guided Self-training for Source-free Domain Adaptive Semantic Segmentation (Poster)
An Empirical Study on Distribution Shift Robustness From the Perspective of Pre-Training and Data Augmentation (Poster)
Characterizing Anomalies with Explainable Classifiers (Poster)
Performative Prediction with Neural Networks (Poster)
Improving Domain Generalization with Interpolation Robustness (Poster)
Deconstructing Distributions: A Pointwise Framework of Learning (Poster)
Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts (Poster)
Impact of realistic properties of the point spread function on classification tasks to reveal a possible distribution shift (Poster)
A Simple Baseline that Questions the Use of Pretrained-Models in Continual Learning (Poster)
RLSBench: A Large-Scale Empirical Study of Domain Adaptation Under Relaxed Label Shift (Poster)
Mitigating Dataset Bias by Using Per-sample Gradient (Poster)
Train Offline, Test Online: A Real Robot Learning Benchmark (Poster)
The Value of Out-of-distribution Data (Poster)
Reliability benchmarks for image segmentation (Poster)
Adaptive Pre-training of Language Models for Better Logical Reasoning (Poster)
Using Interventions to Improve Out-of-Distribution Generalization of Text-Matching Systems (Poster)
Dropout Disagreement: A Recipe for Group Robustness with Fewer Annotations (Poster)