Competition: Causal Insights for Learning Paths in Education Tue 6 Dec 05:00 a.m.
In this competition, participants will address two fundamental causal challenges in machine learning in the context of education using time-series data. The first is to identify the causal relationships between different constructs, where a construct is defined as the smallest element of learning. The second challenge is to predict the impact of learning one construct on the ability to answer questions on other constructs. Addressing these challenges will enable the optimisation of students' knowledge acquisition, and the resulting methods can be deployed in a real edtech solution impacting millions of students. Participants will run these tasks in an idealised environment with synthetic data and in a real-world scenario with evaluation data collected from a series of A/B tests.
Competition: IGLU: Interactive Grounded Language Understanding in a Collaborative Environment Tue 6 Dec 05:00 a.m.
Human intelligence has the remarkable ability to quickly adapt to new tasks and environments. Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions. To facilitate research in this direction, we propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment. The primary goal of the competition is to approach the problem of developing interactive embodied agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment. Understanding the complexity of the challenge, we split it into sub-tasks to make it feasible for participants. This research challenge is naturally related, but not limited, to two fields of study that are highly relevant to the NeurIPS community: Natural Language Understanding and Generation (NLU/G) and Reinforcement Learning (RL). Therefore, the suggested challenge can bring the two communities together to approach one of the important challenges in AI. Another important aspect of the challenge is the commitment to performing a human-in-the-loop evaluation as the final evaluation for the agents developed by contestants.
Cross-Domain MetaDL: Any-Way Any-Shot Learning Competition with Novel Datasets from Practical Domains Tue 6 Dec 05:00 a.m.
Meta-learning aims to leverage experience from previous tasks to solve new tasks using only a small amount of training data, to train faster, and/or to achieve better performance. The proposed challenge focuses on "cross-domain meta-learning" for few-shot image classification using a novel "any-way" and "any-shot" setting. The goal is to meta-learn a good model that can quickly learn tasks from a variety of domains, with any number of classes, also called "ways" (within the range 2-20), and any number of training examples per class, also called "shots" (within the range 1-20). We carve such tasks from various "mother datasets" selected from diverse domains, such as healthcare, ecology, biology, and manufacturing, among others. By using mother datasets from these practical domains, we aim to maximize humanitarian and societal impact. The competition requires code submission and is fully blind-tested on the CodaLab challenge platform. A single (final) submission will be evaluated during the final phase, using ten datasets previously unused by the meta-learning community. After the competition is over, it will remain active as a long-lasting benchmark resource for research in this field. The scientific and technical motivations of this challenge include scalability, robustness to domain changes, and the ability to generalize to tasks (a.k.a. episodes) in different regimes (any-way any-shot).
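The "any-way any-shot" setting above can be made concrete with a small sketch of episode sampling. The function name, data layout, and toy dataset below are illustrative only and are not part of the official starting kit.

```python
import random

def sample_episode(dataset, num_ways_range=(2, 20), num_shots_range=(1, 20)):
    """Sample one any-way any-shot episode from a labeled dataset.

    `dataset` maps class label -> list of examples. The ranges follow the
    challenge description (2-20 ways, 1-20 shots); everything else is a
    hypothetical layout chosen for illustration.
    """
    n_way = random.randint(*num_ways_range)
    n_shot = random.randint(*num_shots_range)
    # Only classes with enough examples for the support set plus one query item.
    eligible = [c for c, xs in dataset.items() if len(xs) > n_shot]
    classes = random.sample(eligible, n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(dataset[cls], n_shot + 1)
        support += [(x, label) for x in examples[:n_shot]]
        query.append((examples[n_shot], label))
    return support, query

# Toy "mother dataset": 25 classes with 30 examples each.
toy = {c: [f"img_{c}_{i}" for i in range(30)] for c in range(25)}
support, query = sample_episode(toy)
```

A meta-learner would be trained and evaluated on a stream of such episodes, each potentially drawn from a different mother dataset.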
Competition: Traffic4cast 2022 – Predict Dynamics along Graph Edges from Sparse Node Data: Whole City Traffic and ETA from simple Road Counters Tue 6 Dec 05:00 a.m.
The global trends of urbanization and increased personal mobility force us to rethink the way we live and use urban space. The Traffic4cast competition series tackles this problem in a data-driven way, advancing the latest methods in modern machine learning for modelling complex spatial systems over time. This year, our dynamic road graph data combine information from road maps, 10^12 location probe data points, and car loop counters in three entire cities for two years. While loop counters are the most accurate way to capture traffic volume, they are only available in some locations. Traffic4cast 2022 explores models that can generalize from loosely related temporal vertex data on just a few nodes to predict dynamic future traffic states on the edges of the entire road graph. Specifically, in our core challenge we invite participants to predict, for three cities and for the entire road graph, the congestion classes known from the red, yellow, or green colouring of roads on a common traffic map 15 min into the future. We provide car count data from spatially sparse loop counters in these three cities, aggregated into 15 min time bins for the hour prior to the prediction time slot. For our extended challenge, participants are asked to predict the actual average speeds on each road segment in the graph 15 min into the future.
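As a rough illustration of the core task's shape (sparse node counts in, per-edge congestion classes out), here is a toy baseline that propagates counter readings to edges and thresholds them. The thresholds, data layout, and fallback rule are invented for illustration and are unrelated to the official data format or metric.

```python
def predict_congestion(edges, counter_counts, thresholds=(20.0, 50.0)):
    """Toy baseline: classify each road edge green/yellow/red from sparse counters.

    `edges` is a list of (u, v) node pairs; `counter_counts` maps the few
    nodes that carry loop counters to their 15 min vehicle counts. Edges
    touching no counter fall back to the global mean count. The thresholds
    are hypothetical, not taken from the competition.
    """
    if not counter_counts:
        raise ValueError("need at least one counter")
    global_mean = sum(counter_counts.values()) / len(counter_counts)
    lo, hi = thresholds
    classes = []
    for u, v in edges:
        local = [counter_counts[n] for n in (u, v) if n in counter_counts]
        volume = sum(local) / len(local) if local else global_mean
        classes.append("green" if volume < lo else "yellow" if volume < hi else "red")
    return classes

edges = [(0, 1), (1, 2), (2, 3)]
counts = {0: 10.0, 3: 80.0}  # counters at only two of the four nodes
print(predict_congestion(edges, counts))  # → ['green', 'yellow', 'red']
```

Competition entries would replace this heuristic with a learned model (e.g. a graph neural network) that maps the one-hour history of counter readings to edge states.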
Competition: Real Robot Challenge III - Learning Dexterous Manipulation from Offline Data in the Real World Tue 6 Dec 05:00 a.m.
In this year's Real Robot Challenge, participants will apply offline reinforcement learning (RL) to robotics datasets and evaluate their policies remotely on a cluster of real TriFinger robots. Experimentation on real robots is usually costly and challenging, which is why a large part of the RL community uses simulators to develop and benchmark algorithms. However, insights gained in simulation do not necessarily translate to real robots, in particular for tasks involving complex interaction with the environment. The purpose of this competition is to alleviate this problem by allowing participants to experiment remotely with a real robot - as easily as in simulation. In the last two years, offline RL algorithms have become increasingly popular and capable. This year's Real Robot Challenge provides a platform for evaluating, comparing, and showcasing the performance of these algorithms on real-world data. In particular, we propose a dexterous manipulation problem that involves pushing, grasping, and in-hand orientation of blocks.
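The offline setting means learning purely from logged (state, action) data, without further environment interaction. A minimal sketch of the usual starting point, behavior cloning on a toy one-dimensional logged dataset, is shown below; the data, policy class, and hyperparameters are invented for illustration and are unrelated to the TriFinger datasets.

```python
def behavior_clone(transitions, lr=0.05, epochs=1000):
    """Fit a linear policy a = w*s + b to logged (state, action) pairs by SGD.

    Behavior cloning is the simplest offline baseline: supervised regression
    from states to the actions the logging policy took. Stronger offline RL
    methods additionally guard against distribution shift.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for s, a in transitions:
            err = (w * s + b) - a  # prediction error on this logged pair
            w -= lr * err * s
            b -= lr * err
    return w, b

# Hypothetical logged demonstrations from a behavior policy a = 2s + 1.
logged = [(s / 10.0, 2.0 * (s / 10.0) + 1.0) for s in range(10)]
w, b = behavior_clone(logged)  # w, b approach roughly 2.0 and 1.0
```

On real robot data the policy would of course be a neural network over high-dimensional observations, but the training loop has the same supervised structure.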
The MineRL BASALT Competition on Fine-tuning from Human Feedback Tue 6 Dec 05:00 a.m.
Given the impressive capabilities demonstrated by pre-trained foundation models, we must now grapple with how to harness these capabilities towards useful tasks. Since many such tasks are hard to specify programmatically, researchers have turned towards a different paradigm: fine-tuning from human feedback. The MineRL BASALT competition aims to spur research on this important class of techniques in the domain of the popular video game Minecraft. The competition consists of a suite of four tasks with hard-to-specify reward functions. We define these tasks by a paragraph of natural language: for example, "create a waterfall and take a scenic picture of it", with additional clarifying details. Participants train a separate agent for each task, using any method they want; we expect participants will choose to fine-tune the provided pre-trained models. Agents are then evaluated by humans who have read the task description. To help participants get started, we provide a dataset of human demonstrations of the four tasks, as well as an imitation learning baseline that leverages these demonstrations. We believe this competition will improve our ability to build AI systems that do what their designers intend them to do, even when the intent cannot be easily formalized. This achievement will allow AI to solve more tasks, enable more effective regulation of AI systems, and make progress on the AI alignment problem.
Competition: Driving SMARTS Tue 6 Dec 07:00 a.m.
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts that are prevalent in real-world autonomous driving (AD). The competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods, trained on a combination of naturalistic AD data and the open-source simulation platform SMARTS. The two-track structure allows focusing on different aspects of the distribution shift. Track 1 is open to any method and will give ML researchers with different backgrounds an opportunity to solve a real-world autonomous driving challenge. Track 2 is restricted to strictly offline learning methods, so that direct comparisons can be made between different methods with the aim of identifying new promising research directions. The proposed setup consists of 1) realistic traffic generated using real-world data and micro-simulators to ensure fidelity of the scenarios, 2) a framework accommodating diverse methods for solving the problem, and 3) a baseline method. As such, it provides a unique opportunity for principled investigation into various aspects of autonomous vehicle deployment.
Spotlight: Featured Papers Panels 1B Tue 6 Dec 11:00 a.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep dive session on related topics. The deep dive will begin immediately after the lightning talks and the related Q&A (possibly before the 15 minutes are over). We will not take any questions via microphone; please use Slido instead (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
- [ 64787 ] OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs
- [ 64788 ] Predictive Coding beyond Gaussian Distributions
- [ 64790 ] What Makes Graph Neural Networks Miscalibrated?
- [ 64791 ] NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification
- [ 64792 ] Redundancy-Free Message Passing for Graph Neural Networks
- [ 64794 ] EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks
- [ 64796 ] Probing Classifiers are Unreliable for Concept Removal and Detection
Q&A on RocketChat immediately following Lightning Talks
- [ 64803 ] Weisfeiler and Leman Go Walking: Random Walk Kernels Revisited
- [ 64804 ] Non-Gaussian Tensor Programs
- [ 64808 ] Chromatic Correlation Clustering, Revisited
Q&A on RocketChat immediately following Lightning Talks
- [ 64834 ] Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid
- [ 64835 ] Improving Task-Specific Generalization in Few-Shot Learning via Adaptive Vicinal Risk Minimization
- [ 64836 ] An Investigation into Whitening Loss for Self-supervised Learning
- [ 64837 ] Task-Free Continual Learning via Online Discrepancy Distance Learning
- [ 64838 ] Efficient Knowledge Distillation from Model Checkpoints
- [ 64839 ] Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors
- [ 64841 ] Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics
Q&A on RocketChat immediately following Lightning Talks
- [ 64842 ] Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation
- [ 64843 ] Improved Fine-Tuning by Better Leveraging Pre-Training Data
- [ 64845 ] Task Discovery: Finding the Tasks that Neural Networks Generalize on
- [ 64846 ] Coresets for Relational Data and The Applications
- [ 64847 ] Teacher Forcing Recovers Reward Functions for Text Generation
- [ 64849 ] UniGAN: Reducing Mode Collapse in GANs using a Uniform Generator
- [ 64850 ] Gradient Estimation with Discrete Stein Operators
- [ 64851 ] Detecting Abrupt Changes in Sequential Pairwise Comparison Data
- [ 64852 ] Continuous Deep Q-Learning in Optimal Control Problems: Normalized Advantage Functions Analysis
Q&A on RocketChat immediately following Lightning Talks
Featured Papers Panels 1C Tue 6 Dec 11:00 a.m.
Spotlight: Featured Papers Panels 1A Tue 6 Dec 11:00 a.m.
- [ 64766 ] Transfer Learning in Information Criteria-based Feature Selection
- [ 64768 ] tntorch: Tensor Network Learning with PyTorch
- [ 64769 ] InterpretDL: Explaining Deep Models in PaddlePaddle
- [ 64774 ] [Re] Value Alignment Verification
Q&A on RocketChat immediately following Lightning Talks
- [ 64777 ] [Re] Projection-based Algorithm for Updating the TruncatedSVD of Evolving Matrices
- [ 64780 ] [Re] Learning Unknown from Correlations: Graph Neural Network for Inter-novel-protein Interaction Prediction
Q&A on RocketChat immediately following Lightning Talks
- [ 64809 ] Active Bayesian Causal Inference
- [ 64810 ] Causal Discovery in Heterogeneous Environments Under the Sparse Mechanism Shift Hypothesis
- [ 64811 ] Disentangling Causal Effects from Sets of Interventions in the Presence of Unobserved Confounders
- [ 64812 ] Provable Benefit of Multitask Representation Learning in Reinforcement Learning
- [ 64813 ] Self-Supervised Learning via Maximum Entropy Coding
- [ 64816 ] Counterfactual Temporal Point Processes
- [ 64817 ] Provable Subspace Identification Under Post-Nonlinear Mixtures
- [ 64818 ] VF-PS: How to Select Important Participants in Vertical Federated Learning, Efficiently and Securely?
- [ 64819 ] Learning Multi-resolution Functional Maps with Spectral Attention for Robust Shape Matching
Q&A on RocketChat immediately following Lightning Talks
- [ 64820 ] CoPur: Certifiably Robust Collaborative Inference via Feature Purification
- [ 64821 ] Data Augmentation MCMC for Bayesian Inference from Privatized Data
- [ 64822 ] Towards Trustworthy Automatic Diagnosis Systems by Emulating Doctors' Reasoning with Deep Reinforcement Learning
- [ 64824 ] On the Learning Mechanisms in Physical Reasoning
- [ 64825 ] Align then Fusion: Generalized Large-scale Multi-view Clustering with Anchor Matching Correspondences
- [ 64827 ] GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models
- [ 64828 ] Conformal Off-Policy Prediction in Contextual Bandits
- [ 64830 ] A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning
Q&A on RocketChat immediately following Lightning Talks
Competition: MyoChallenge: Learning contact-rich manipulation using a musculoskeletal hand Tue 6 Dec 03:00 p.m.
Manual dexterity has been considered one of the critical components of human evolution. The ability to perform movements as simple as holding and rotating an object in the hand without dropping it requires the coordination of more than 35 muscles which act synergistically or antagonistically on multiple joints. These muscles control the flexion and extension of the joints connecting the bones, which in turn allow manipulation to happen. This complexity of control is markedly different from the typical pre-specified movements or torque-based controls used in robotics. In this competition, MyoChallenge, participants will develop controllers for a realistic hand to solve a series of dexterous manipulation tasks. Participants will be provided with a physiologically accurate and efficient neuromusculoskeletal human hand model developed in the (free) MuJoCo physics simulator; the provided model also supports contact-rich interactions. Participants will interface with a standardized training environment to help build the controllers. The final score will then be based on an environment with unknown parameters. This challenge builds on three previous NeurIPS challenges on controlling musculoskeletal leg models for locomotion, which attracted about 1300 participants, generated 8000 submissions, and produced 9 academic publications. This challenge will leverage the experience and knowledge from the previous challenges and will further establish neuromusculoskeletal modelling as a benchmark for the neuromuscular control and machine learning communities. In addition to providing challenges for the biomechanics and machine learning communities, this challenge will provide new opportunities to explore solutions that could inspire the robotics, medical, and rehabilitation fields on one of the most complex dexterous skills humans are able to perform.
The SENSORIUM competition on predicting large scale mouse primary visual cortex activity Tue 6 Dec 03:00 p.m.
The experimental study of neural information processing in the biological visual system is challenging due to the nonlinear nature of neuronal responses to visual input. Artificial neural networks play a dual role in improving our understanding of this complex system: they not only allow computational neuroscientists to build predictive digital twins for novel hypothesis generation in silico, but also allow machine learning to progressively bridge the gap between biological and machine vision. The mouse has recently emerged as a popular model system for studying visual information processing, but no standardized large-scale benchmark to identify state-of-the-art models of the mouse visual system has been established. To fill this gap, we propose the SENSORIUM benchmark competition. We collected a large-scale dataset from mouse primary visual cortex containing the responses of more than 28,000 neurons across seven mice stimulated with thousands of natural images. Using this dataset, we will host two benchmark tracks to find the best predictive models of neuronal responses on a held-out test set. The two tracks differ in whether measured behavior signals are made available or not. We provide code, tutorials, and pre-trained baseline models to lower the barrier to entering the competition. Beyond this proposal, our goal is to keep the accompanying website open with new yearly challenges, so that it becomes a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.
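A common way to score such predictive models on a held-out test set is the per-neuron correlation between predicted and observed responses; whether this matches the official SENSORIUM metric exactly is an assumption here, and the toy data below are invented for illustration.

```python
def per_neuron_correlation(pred, obs):
    """Pearson correlation per neuron between predicted and observed responses.

    `pred` and `obs` are (images x neurons) nested lists. A neuron with zero
    variance in either array gets a correlation of 0.0 by convention here.
    """
    n_images, n_neurons = len(pred), len(pred[0])
    corrs = []
    for j in range(n_neurons):
        p = [pred[i][j] for i in range(n_images)]
        o = [obs[i][j] for i in range(n_images)]
        mp, mo = sum(p) / n_images, sum(o) / n_images
        cov = sum((a - mp) * (b - mo) for a, b in zip(p, o))
        sp = sum((a - mp) ** 2 for a in p) ** 0.5
        so = sum((b - mo) ** 2 for b in o) ** 0.5
        corrs.append(cov / (sp * so) if sp and so else 0.0)
    return corrs

# Toy check: neuron 0 is perfectly predicted, neuron 1 is anti-correlated.
scores = per_neuron_correlation([[1, 1], [2, 2], [3, 3]],
                                [[1, 3], [2, 2], [3, 1]])
```

A leaderboard would typically summarize such per-neuron scores by their mean across all recorded neurons.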
Spotlight: Featured Papers Panels 2B Tue 6 Dec 07:00 p.m.
- [ 64876 ] Cross Aggregation Transformer for Image Restoration
- [ 64877 ] BiMLP: Compact Binary Architectures for Vision Multi-Layer Perceptrons
- [ 64878 ] Inception Transformer
- [ 64879 ] GhostNetV2: Enhance Cheap Operation with Long-Range Attention
- [ 64880 ] MCMAE: Masked Convolution Meets Masked Autoencoders
- [ 64881 ] Deep Fourier Up-Sampling
- [ 64883 ] SKFlow: Learning Optical Flow with Super Kernels
- [ 64885 ] RTFormer: Efficient Design for Real-Time Semantic Segmentation with Transformer
Q&A on RocketChat immediately following Lightning Talks
- [ 64887 ] ComGAN: Unsupervised Disentanglement and Segmentation via Image Composition
- [ 64888 ] GLIF: A Unified Gated Leaky Integrate-and-Fire Neuron for Spiking Neural Networks
- [ 64890 ] One Inlier is First: Towards Efficient Position Encoding for Point Cloud Registration
- [ 64891 ] Log-Polar Space Convolution Layers
- [ 64892 ] Streaming Radiance Fields for 3D Video Synthesis
- [ 64893 ] Flexible Neural Image Compression via Code Editing
- [ 64894 ] AutoST: Towards the Universal Modeling of Spatio-temporal Sequences
Q&A on RocketChat immediately following Lightning Talks
- [ 64921 ] Understanding Square Loss in Training Overparametrized Neural Network Classifiers
- [ 64922 ] UMIX: Improving Importance Weighting for Subpopulation Shift via Uncertainty-Aware Mixup
- [ 64923 ] LOG: Active Model Adaptation for Label-Efficient OOD Generalization
- [ 64927 ] MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning
- [ 64929 ] Distributionally Robust Optimization with Data Geometry
Q&A on RocketChat immediately following Lightning Talks
- [ 64932 ] Estimating Noise Transition Matrix with Label Correlations for Noisy Multi-Label Learning
- [ 64933 ] Estimating graphical models for count data with applications to single-cell gene network
- [ 64934 ] Concentration of Data Encoding in Parameterized Quantum Circuits
- [ 64935 ] On Learning Fairness and Accuracy on Multiple Subgroups
- [ 64936 ] Panchromatic and Multispectral Image Fusion via Alternating Reverse Filtering Network
- [ 64938 ] Doubly Robust Counterfactual Classification
- [ 64939 ] Trading off Image Quality for Robustness is not Necessary with Regularized Deterministic Autoencoders
Q&A on RocketChat immediately following Lightning Talks
Featured Papers Panels 2C Tue 6 Dec 07:00 p.m.
Spotlight: Featured Papers Panels 2A Tue 6 Dec 07:00 p.m.
- [ 64853 ] Benefits of Additive Noise in Composing Classes with Bounded Capacity
- [ 64857 ] Posterior Matching for Arbitrary Conditioning
- [ 64858 ] "Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach
- [ 64859 ] Lipschitz Bandits with Batched Feedback
- [ 64860 ] Re-Analyze Gauss: Bounds for Private Matrix Approximation via Dyson Brownian Motion
- [ 64861 ] Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power
- [ 64862 ] Approaching Quartic Convergence Rates for Quasi-Stochastic Approximation with Application to Gradient-Free Optimization
- [ 64863 ] Blessing of Depth in Linear Regression: Deeper Models Have Flatter Landscape Around the True Solution
Q&A on RocketChat immediately following Lightning Talks
- [ 64865 ] Robust Learning against Relational Adversaries
- [ 64867 ] SNN-RAT: Robustness-enhanced Spiking Neural Network through Regularized Adversarial Training
- [ 64869 ] Trustworthy Monte Carlo
- [ 64870 ] Sampling from Log-Concave Distributions with Infinity-Distance Guarantees
- [ 64871 ] Can Adversarial Training Be Manipulated By Non-Robust Features?
- [ 64873 ] Causality Preserving Chaotic Transformation and Classification using Neurochaos Learning
Q&A on RocketChat immediately following Lightning Talks
- [ 64897 ] Hyper-Representations as Generative Models: Sampling Unseen Neural Network Weights
- [ 64899 ] NeMF: Neural Motion Fields for Kinematic Animation
- [ 64901 ] Brain Network Transformer
- [ 64902 ] Distilling Representations from GAN Generator via Squeeze and Span
- [ 64904 ] Graph Neural Networks with Adaptive Readouts
- [ 64906 ] Could Giant Pre-trained Image Models Extract Universal Representations?
- [ 64907 ] On the relationship between variational inference and auto-associative memory
Q&A on RocketChat immediately following Lightning Talks
- [ 64908 ] Diagonal State Spaces are as Effective as Structured State Spaces
- [ 64911 ] You Never Stop Dancing: Non-freezing Dance Generation via Bank-constrained Manifold Projection
- [ 64913 ] Wasserstein Iterative Networks for Barycenter Estimation
- [ 64914 ] Neural Approximation of Graph Topological Features
- [ 64915 ] Regularized Molecular Conformation Fields
- [ 64916 ] Is a Modular Architecture Enough?
- [ 64917 ] Knowledge-Aware Bayesian Deep Topic Model
- [ 64918 ] Deterministic Langevin Monte Carlo with Normalizing Flows for Bayesian Inference
Q&A on RocketChat immediately following Lightning Talks