Competition: Weather4cast - Super-Resolution Rain Movie Prediction under Spatio-temporal Shifts Thu 8 Dec 05:00 a.m.
The Weather4cast NeurIPS Competition has high practical impact for society: unusual weather is increasing all over the world, reflecting ongoing climate change and affecting communities in agriculture, transport, public health, safety, and beyond. Can you predict future rain patterns with modern machine learning algorithms? Apply spatio-temporal modelling to complex dynamic systems, get access to unique large-scale data, and demonstrate temporal and spatial transfer learning under strong distributional shifts. We provide a super-resolution challenge of high relevance to local events: predict future weather as measured by ground-based high-resolution rain radar weather stations. In addition to movies comprising rain radar maps, you get large-scale multi-band satellite sensor images for exploiting data fusion. Winning models will advance key areas of methods research in machine learning, of relevance beyond the application domain.
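To illustrate the shape of the task (past rain-radar movies in, future frames out), here is a minimal persistence baseline in PyTorch. The class name, tensor shapes, and prediction horizon are illustrative assumptions, not the official starter kit.

```python
# Hypothetical sketch of the Weather4cast task interface; shapes are
# illustrative assumptions, not the official data layout.
import torch
import torch.nn as nn

class NaivePersistenceBaseline(nn.Module):
    """Predicts that the last observed radar frame persists into the future."""
    def forward(self, radar_past: torch.Tensor, horizon: int) -> torch.Tensor:
        # radar_past: (batch, time, H, W) past rain-radar frames
        last = radar_past[:, -1:]             # keep only the final frame
        return last.repeat(1, horizon, 1, 1)  # repeat it `horizon` steps ahead

# Example: 4 past frames at 252x252 resolution, predict 16 future frames.
past = torch.rand(2, 4, 252, 252)
pred = NaivePersistenceBaseline()(past, horizon=16)
print(pred.shape)  # torch.Size([2, 16, 252, 252])
```

Any real entry would replace the persistence rule with a learned spatio-temporal model that also ingests the multi-band satellite imagery for data fusion.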
Social: How to negotiate industry offers Thu 8 Dec 11:00 a.m.
Join the team at Rora and 81cents to get the tools, information, and data you need to negotiate your next offer in AI more confidently.
Some of the topics we'll cover in this 1.5-hour session (with half an hour for Q&A) are:
- Understanding the fundamentals of compensation in tech (particularly around equity, bonus structures, etc.)
- How to get over your fears of negotiating
- How to decide which company / offer is right for you
- How to negotiate without counter offers and without knowing "market value"
- How to respond to pushback from recruiters and other guilt-tripping, lowballing, and pressure tactics
- How to avoid having an offer rescinded
- How to negotiate the deadline of an offer
- Walking through a timeline of the negotiation process for a new offer
Spotlight: Featured Papers Panels 5A Thu 8 Dec 11:00 a.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep-dive session on related topics. The deep dive will begin immediately after the lightning talks and their Q&A (possibly before the 15 minutes are over). We will not take any questions via microphone; instead, please use Slido (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
- [ 65116 ] DOMINO: Decomposed Mutual Information Optimization for Generalized Context in Meta-Reinforcement Learning
- [ 65117 ] Explicable Policy Search
- [ 65120 ] CUP: Critic-Guided Policy Reuse
- [ 65121 ] TarGF: Learning Target Gradient Field for Object Rearrangement
- [ 65122 ] RORL: Robust Offline Reinforcement Learning via Conservative Smoothing
- [ 65123 ] When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65125 ] Generalized Delayed Feedback Model with Post-Click Information in Recommender Systems
- [ 65126 ] Learning to Constrain Policy Optimization with Virtual Trust Region
- [ 65127 ] Improving Generative Adversarial Networks via Adversarial Learning in Latent Space
- [ 65128 ] Heatmap Distribution Matching for Human Pose Estimation
- [ 65129 ] Explainable Reinforcement Learning via Model Transforms
- [ 65131 ] Multi-agent Performative Prediction with Greedy Deployment and Consensus Seeking Agents
- [ 65132 ] Mingling Foresight with Imagination: Model-Based Cooperative Multi-Agent Reinforcement Learning
- [ 65133 ] Explain My Surprise: Learning Efficient Long-Term Memory by predicting uncertain outcomes
- [ 65135 ] MSDS: A Large-Scale Chinese Signature and Token Digit String Dataset for Handwriting Verification
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65158 ] E-MAPP: Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance
- [ 65159 ] Self-Organized Group for Cooperative Multi-agent Reinforcement Learning
- [ 65160 ] GALOIS: Boosting Deep Reinforcement Learning via Generalizable Logic Synthesis
- [ 65162 ] Multiagent Q-learning with Sub-Team Coordination
- [ 65163 ] Pre-Trained Image Encoder for Generalizable Visual Reinforcement Learning
- [ 65164 ] Multi-agent Dynamic Algorithm Configuration
- [ 65165 ] Iso-Dream: Isolating Noncontrollable Visual Dynamics in World Models
- [ 65166 ] Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning
- [ 65167 ] Learning Active Camera for Multi-Object Navigation
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65169 ] Towards Versatile Embodied Navigation
- [ 65172 ] Equivariant Graph Hierarchy-Based Neural Networks
- [ 65173 ] Causality-driven Hierarchical Structure Discovery for Reinforcement Learning
- [ 65174 ] Multi-Lingual Acquisition on Multimodal Pre-training for Cross-modal Retrieval
- [ 65175 ] Obj2Seq: Formatting Objects as Sequences with Class Prompt for Visual Tasks
- [ 65176 ] SAPipe: Staleness-Aware Pipeline for Data Parallel DNN Training
- [ 65177 ] Learning Structure from the Ground up---Hierarchical Representation Learning by Chunking
- [ 65178 ] How and Why to Manipulate Your Own Agent: On the Incentives of Users of Learning Agents
- [ 65179 ] A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks
Q&A on RocketChat immediately following Lightning Talks
Spotlight: Featured Papers Panels 5B Thu 8 Dec 11:00 a.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep-dive session on related topics. The deep dive will begin immediately after the lightning talks and their Q&A (possibly before the 15 minutes are over). We will not take any questions via microphone; instead, please use Slido (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
- [ 65136 ] EpiGRAF: Rethinking training of 3D GANs
- [ 65137 ] CHIMLE: Conditional Hierarchical IMLE for Multimodal Conditional Image Synthesis
- [ 65138 ] HF-NeuS: Improved Surface Reconstruction Using High-Frequency Details
- [ 65139 ] Improving 3D-aware Image Synthesis with A Geometry-aware Discriminator
- [ 65140 ] Residual Multiplicative Filter Networks for Multiscale Reconstruction
- [ 65141 ] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
- [ 65142 ] LOT: Layer-wise Orthogonal Training on Improving l2 Certified Robustness
- [ 65143 ] Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65147 ] The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models
- [ 65148 ] LiteTransformerSearch: Training-free Neural Architecture Search for Efficient Language Models
- [ 65149 ] Your Out-of-Distribution Detection Method is Not Robust!
- [ 65151 ] SAPA: Similarity-Aware Point Affiliation for Feature Upsampling
- [ 65152 ] Accelerating Certified Robustness Training via Knowledge Transfer
- [ 65155 ] Shape, Light, and Material Decomposition from Images using Monte Carlo Rendering and Denoising
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65180 ] Watermarking for Out-of-distribution Detection
- [ 65181 ] Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs
- [ 65183 ] Self-supervised Amodal Video Object Segmentation
- [ 65184 ] Spectrum Random Masking for Generalization in Image-based Reinforcement Learning
- [ 65185 ] Learning Substructure Invariance for Out-of-Distribution Molecular Representations
- [ 65187 ] Stochastic Window Transformer for Image Restoration
- [ 65188 ] Rethinking and Improving Robustness of Convolutional Neural Networks: a Shapley Value-based Approach in Frequency Domain
- [ 65189 ] Learning Generalizable Models for Vehicle Routing Problems via Knowledge Distillation
- [ 65190 ] AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65193 ] Effective Backdoor Defense by Exploiting Sensitivity of Poisoned Samples
- [ 65195 ] Visual Concepts Tokenization
- [ 65196 ] DARE: Disentanglement-Augmented Rationale Extraction
- [ 65197 ] HyperMiner: Topic Taxonomy Mining with Hyperbolic Embedding
- [ 65198 ] ZIN: When and How to Learn Invariance Without Environment Partition?
- [ 65199 ] DeepInteraction: 3D Object Detection via Modality Interaction
- [ 65200 ] Discovering Design Concepts for CAD Sketches
Q&A on RocketChat immediately following Lightning Talks
Competition: VisDA 2022 Challenge: Sim2Real Domain Adaptation for Industrial Recycling Thu 8 Dec 03:00 p.m.
Efficient post-consumer waste recycling is one of the key challenges of modern society, as countries struggle to find sustainable solutions to rapidly rising waste levels and to avoid increased soil and sea pollution. The US is one of the leading countries in waste generation by volume but recycles less than 35% of its recyclable waste. Recyclable waste is sorted according to material type (paper, plastic, etc.) in material recovery facilities (MRFs), which still heavily rely on manual sorting. Computer vision solutions are an essential component in automating waste sorting and ultimately solving the pollution problem. In this sixth iteration of the VisDA challenge, we introduce a simulation-to-real (Sim2Real) semantic image segmentation competition for industrial waste sorting. We aim to answer the question: can synthetic data augmentation improve performance on this task and help adapt to changing data distributions? Label-efficient and reliable semantic segmentation is essential for this setting, but it differs significantly from existing semantic segmentation datasets: waste objects are typically severely deformed and randomly located, which limits the efficacy of both shape and context priors, and they exhibit long-tailed distributions and high clutter. Synthetic data augmentation can benefit such applications given the difficulty of obtaining labels and the prevalence of rare categories. However, new solutions are needed to overcome the large domain gap between simulated and real images. Natural domain shift due to factors such as MRF location, season, machinery in use, etc., also needs to be handled in this application. Competitors will have access to two sources of training data: a novel procedurally generated synthetic waste sorting dataset, SynthWaste, as well as fully-annotated waste sorting data collected from a real material recovery facility. The target test set will be real data from a different MRF.
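A minimal sketch of the joint training setup described above, mixing synthetic (SynthWaste) and real MRF frames in each segmentation batch. The toy model, class count, crop size, and equal loss weighting are illustrative assumptions, not the official challenge kit.

```python
# Sketch: Sim2Real segmentation training on mixed synthetic + real batches.
import torch
import torch.nn as nn

NUM_CLASSES = 5  # assumption: e.g. paper, plastic, metal, glass, background
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, NUM_CLASSES, 1))  # per-pixel class logits
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(synth_imgs, synth_masks, real_imgs, real_masks):
    imgs = torch.cat([synth_imgs, real_imgs])    # mix both domains in one batch
    masks = torch.cat([synth_masks, real_masks])
    opt.zero_grad()
    loss = ce(model(imgs), masks)                # standard per-pixel loss
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batch: 2 synthetic + 2 real 64x64 RGB crops with per-pixel labels.
si, ri = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
sm = torch.randint(0, NUM_CLASSES, (2, 64, 64))
rm = torch.randint(0, NUM_CLASSES, (2, 64, 64))
print(train_step(si, sm, ri, rm))
```

Closing the Sim2Real gap then becomes a matter of what is layered on top of this loop (domain-adversarial losses, style transfer, self-training on the unlabeled target MRF, etc.).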
Competition: The Third Neural MMO Challenge: Learning to Specialize in Massively Multiagent Open Worlds Thu 8 Dec 03:00 p.m.
Neural MMO is an open-source environment for agent-based intelligence research featuring large maps with large populations, long time horizons, and open-ended multi-task objectives. We propose a benchmark on this platform wherein participants train and submit agents to accomplish loosely specified goals -- both as individuals and as part of a team. The submitted agents are evaluated against thousands of other such user submitted agents. Participants get started with a publicly available code base for Neural MMO, scripted and learned baseline models, and training/evaluation/visualization packages. Our objective is to foster the design and implementation of algorithms and methods for adapting modern agent-based learning methods (particularly reinforcement learning) to a more general setting not limited to few agents, narrowly defined tasks, or short time horizons. Neural MMO provides a convenient setting for exploring these ideas without the computational inefficiency typically associated with larger environments.
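For orientation, a gym-style multi-agent interaction loop might look like the sketch below. It assumes the dict-keyed API of the public Neural MMO code base; exact constructor arguments, action formats, and return values vary across versions, so treat this as a shape illustration rather than the official interface.

```python
# Assumed interaction loop for Neural MMO (API details may differ by version).
import nmmo

env = nmmo.Env()           # default config: large map, many concurrent agents
obs = env.reset()          # dict: agent_id -> observation
for _ in range(64):
    # Placeholder no-op policy; a real submission maps each agent's
    # observation to an action via a scripted or learned policy.
    actions = {agent_id: {} for agent_id in obs}
    obs, rewards, dones, infos = env.step(actions)
```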
Competition: OGB-LSC 2022: A Large-Scale Challenge for ML on Graphs Thu 8 Dec 03:00 p.m.
Enabling effective and efficient machine learning (ML) over large-scale graph data (e.g., graphs with billions of edges) can have a huge impact on both industrial and scientific applications. At KDD Cup 2021, we organized the OGB Large-Scale Challenge (OGB-LSC), where we provided large and realistic graph ML tasks. Our KDD Cup attracted huge attention from the graph ML community (more than 500 team registrations across the globe), facilitating the development of innovative methods that yielded significant performance breakthroughs. However, the problem of machine learning over large graphs is not yet solved, and it is important for the community to engage in a focused, multi-year effort in this area (as with ImageNet and MS-COCO). Here we propose an annual ML challenge around large-scale graph datasets, which will drive forward method development and allow for tracking progress. We propose the 2nd OGB-LSC (referred to as OGB-LSC 2022) around the OGB-LSC datasets. Our proposed challenge consists of three tracks, covering the core graph ML tasks of node-level prediction (academic paper classification with 240 million nodes), link-level prediction (knowledge graph completion with 90 million entities), and graph-level prediction (molecular property prediction with 4 million graphs). Importantly, we have updated two of the three datasets based on lessons learned from our KDD Cup, so that the resulting datasets are more challenging and realistic. Our datasets are extensively validated through our baseline analyses and last year's KDD Cup. We also provide baseline code as well as a Python package to easily load the datasets and evaluate model performance.
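Since the package mentioned above is pip-installable (`pip install ogb`), a brief sketch of the `ogb.lsc` interface for the graph-level PCQM4Mv2 track may help; this follows the documented usage, but consult the OGB documentation for authoritative details.

```python
# Sketch: load an OGB-LSC dataset and score predictions with its evaluator.
import numpy as np
from ogb.lsc import PCQM4Mv2Dataset, PCQM4Mv2Evaluator

dataset = PCQM4Mv2Dataset(root='dataset/')  # ~4M molecular graphs (large download)
graph, label = dataset[0]                   # graph object + HOMO-LUMO gap target

# The evaluator computes the track's official metric (MAE for PCQM4Mv2).
evaluator = PCQM4Mv2Evaluator()
y_true = np.array([1.0, 2.0])               # illustrative targets
y_pred = np.array([1.1, 1.9])               # illustrative model outputs
print(evaluator.eval({'y_pred': y_pred, 'y_true': y_true})['mae'])
```

The node-level (MAG240M) and link-level (WikiKG90Mv2) tracks expose analogous dataset and evaluator classes in the same package.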
Competition: Open Catalyst Challenge Thu 8 Dec 03:00 p.m.
Advancements to renewable energy processes are urgently needed to address climate change and energy scarcity around the world. Many of these processes, including the generation of electricity through fuel cells or fuel generation from renewable resources, are driven by chemical reactions. The use of catalysts in these chemical reactions plays a key role in developing cost-effective solutions by enabling new reactions and improving their efficiency. Unfortunately, the discovery of new catalyst materials is limited by the high cost of computational atomic simulations and experimental studies. Machine learning has the potential to reduce the cost of computational simulations by orders of magnitude. By filtering potential catalyst materials based on these simulations, candidates of higher promise may be selected for experimental testing, and the rate at which new catalysts are discovered could be greatly accelerated. The 2nd edition of the Open Catalyst Challenge invites participants to submit results of machine learning models that simulate the interaction of a molecule on a catalyst's surface. Specifically, the task is to predict the energy of an adsorbate-catalyst system in its relaxed state, starting from an arbitrary initial state. From these values, the catalyst's impact on the overall rate of a chemical reaction may be estimated -- a key factor in filtering potential catalyst materials. Competition participants are provided training and validation datasets containing over 6 million data samples from a wide variety of catalyst materials, as well as a new testing dataset specific to the competition. Results will be evaluated, and winners determined, by comparing the predicted relaxed energies against the computationally expensive approach of Density Functional Theory. Baseline models and helper code are available on GitHub: https://github.com/open-catalyst-project/ocp.
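The evaluation described above reduces, at its core, to a mean absolute error between model-predicted relaxed energies and DFT reference energies. A minimal sketch with made-up numbers; the official metrics and tooling live in the ocp repository linked above.

```python
# Sketch: relaxed-energy evaluation as MAE against DFT references.
import numpy as np

dft_energy = np.array([-1.52, -0.83, -2.10])   # DFT relaxed energies (eV), illustrative
pred_energy = np.array([-1.47, -0.95, -2.02])  # model predictions (eV), illustrative

mae = np.abs(pred_energy - dft_energy).mean()
print(f"energy MAE: {mae:.3f} eV")             # lower is better
```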
Competition: Habitat Rearrangement Challenge Thu 8 Dec 03:00 p.m.
We propose the Habitat Rearrangement Challenge. Specifically, a virtual robot (Fetch mobile manipulator) is spawned in a previously unseen simulation environment and asked to rearrange objects from initial to desired positions -- picking/placing objects from receptacles (counter, sink, sofa, table), opening/closing containers (drawers, fridges) as necessary. The robot operates entirely from onboard sensing -- head- and arm-mounted RGB-D cameras, proprioceptive joint-position sensors (for the arm), and egomotion sensors (for the mobile base) -- and may not access any privileged state information (no prebuilt maps, no 3D models of rooms or objects, no physically-implausible sensors providing knowledge of mass, friction, articulation of containers). This is a challenging embodied AI task involving embodied perception, mobile manipulation, sequential decision making in long-horizon tasks, and (potentially) deep reinforcement and imitation learning. Developing such embodied intelligent systems is a goal of deep scientific and societal value, including practical applications in home assistant robots.
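To make the onboard-only sensing constraint concrete, the observation available to the policy at each step has roughly the structure sketched below. The key names, resolutions, and dimensions are assumptions for illustration, not the exact Habitat sensor IDs.

```python
# Illustrative observation structure for the rearrangement agent.
import numpy as np

observation = {
    "head_rgb":       np.zeros((256, 256, 3), dtype=np.uint8),   # head-mounted RGB
    "head_depth":     np.zeros((256, 256, 1), dtype=np.float32), # head-mounted depth
    "arm_rgb":        np.zeros((256, 256, 3), dtype=np.uint8),   # arm-mounted RGB
    "arm_depth":      np.zeros((256, 256, 1), dtype=np.float32),
    "joint_pos":      np.zeros(7, dtype=np.float32),             # proprioceptive arm state
    "base_egomotion": np.zeros(3, dtype=np.float32),             # (dx, dy, dtheta) since last step
}
# Note what is absent: no prebuilt map, no object poses, no mass/friction or
# articulation state -- the policy must act from these onboard signals alone.
```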
Competition: The Trojan Detection Challenge Thu 8 Dec 03:00 p.m.
A growing concern for the security of ML systems is the possibility of Trojan attacks on neural networks. There is now a considerable literature on methods for detecting these attacks. We propose the Trojan Detection Challenge to further the community's understanding of methods to construct and detect Trojans. This competition will consist of complementary tracks on detecting/analyzing Trojans and creating evasive Trojans. Participants will be tasked with devising methods to better detect Trojans using a new dataset containing over 6,000 neural networks. Code and evaluations from three established baseline detectors will provide a starting point, and a novel Minimal Trojan attack will challenge participants to push the state of the art in Trojan detection. Ultimately, we hope our competition spurs practical innovations and clarifies deep questions surrounding the offense-defense balance of Trojan attacks.
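The detection track's evaluation loop amounts to scoring each network in the model zoo with a detector and measuring ranking quality (e.g., AUROC) against ground-truth Trojan labels. The sketch below uses a toy weight-norm heuristic and fake labels as stand-ins for the competition's baseline detectors and dataset.

```python
# Sketch: score networks with a detector, then compute detection AUROC.
import torch
from sklearn.metrics import roc_auc_score

def detector_score(net: torch.nn.Module) -> float:
    # Toy heuristic: flag networks whose overall weight norm is anomalous.
    # Real baselines use far stronger signals (e.g., trigger reconstruction).
    return sum(p.norm().item() for p in net.parameters())

nets = [torch.nn.Linear(16, 2) for _ in range(10)]  # stand-in model zoo
labels = [i % 2 for i in range(10)]                 # 1 = Trojaned (fake labels)
scores = [detector_score(n) for n in nets]
print(roc_auc_score(labels, scores))                # 0.5 = chance, 1.0 = perfect
```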
Spotlight: Featured Papers Panels 6A Thu 8 Dec 07:00 p.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep-dive session on related topics. The deep dive will begin immediately after the lightning talks and their Q&A (possibly before the 15 minutes are over). We will not take any questions via microphone; instead, please use Slido (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
- [ 65202 ] DropCov: A Simple yet Effective Method for Improving Deep Architectures
- [ 65203 ] A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models
- [ 65204 ] Decoupling Features in Hierarchical Propagation for Video Object Segmentation
- [ 65205 ] BMU-MoCo: Bidirectional Momentum Update for Continual Video-Language Modeling
- [ 65207 ] Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
- [ 65209 ] P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting
- [ 65210 ] Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems
- [ 65211 ] Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum
- [ 65212 ] Self-supervised Heterogeneous Graph Pre-training Based on Structural Clustering
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65213 ] How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders
- [ 65214 ] HumanLiker: A Human-like Object Detector to Model the Manual Labeling Process
- [ 65215 ] LGDN: Language-Guided Denoising Network for Video-Language Modeling
- [ 65217 ] When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
- [ 65218 ] Museformer: Transformer with Fine- and Coarse-Grained Attention for Music Generation
- [ 65219 ] Neural-Symbolic Entangled Framework for Complex Query Answering
- [ 65220 ] DTG-SSOD: Dense Teacher Guidance for Semi-Supervised Object Detection
- [ 65221 ] A Unified Hard-Constraint Framework for Solving Geometrically Complex PDEs
- [ 65222 ] Differentially Private Learning with Margin Guarantees
- [ 65223 ] DDXPlus: A New Dataset For Automatic Medical Diagnosis
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65246 ] GenSDF: Two-Stage Learning of Generalizable Signed Distance Functions
- [ 65247 ] PatchComplete: Learning Multi-Resolution Patch Priors for 3D Shape Completion on Unseen Categories
- [ 65248 ] AutoLink: Self-supervised Learning of Human Skeletons and Object Outlines by Linking Keypoints
- [ 65249 ] Efficient and Effective Augmentation Strategy for Adversarial Training
- [ 65250 ] 4D Unsupervised Object Discovery
- [ 65251 ] SNAKE: Shape-aware Neural 3D Keypoint Field
- [ 65252 ] Few-Shot Non-Parametric Learning with Deep Latent Variable Model
- [ 65253 ] Neural Shape Deformation Priors
- [ 65254 ] Few-Shot Continual Active Learning by a Robot
- [ 65255 ] Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data
- [ 65256 ] Segmenting Moving Objects via an Object-Centric Layered Representation
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65257 ] Relational Proxies: Emergent Relationships as Fine-Grained Discriminators
- [ 65258 ] MetaTeacher: Coordinating Multi-Model Domain Adaptation for Medical Image Classification
- [ 65259 ] Two-Stream Network for Sign Language Recognition and Translation
- [ 65260 ] Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks
- [ 65261 ] Egocentric Video-Language Pretraining
- [ 65262 ] Multi-dataset Training of Transformers for Robust Action Recognition
- [ 65264 ] Manifold Interpolating Optimal-Transport Flows for Trajectory Inference
- [ 65265 ] An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning
- [ 65266 ] MultiScan: Scalable RGBD scanning for 3D environments with articulated objects
- [ 65267 ] A Greek Parliament Proceedings Dataset for Computational Linguistics and Political Analysis
Q&A on RocketChat immediately following Lightning Talks
Spotlight: Featured Papers Panels 6B Thu 8 Dec 07:00 p.m.
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep-dive session on related topics. The deep dive will begin immediately after the lightning talks and their Q&A (possibly before the 15 minutes are over). We will not take any questions via microphone; instead, please use Slido (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
- [ 65224 ] A Spectral Approach to Item Response Theory
- [ 65225 ] Stability Analysis and Generalization Bounds of Adversarial Training
- [ 65226 ] Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization
- [ 65228 ] Adam Can Converge Without Any Modification On Update Rules
- [ 65229 ] Poisson Flow Generative Models
- [ 65231 ] Contextual Bandits with Knapsacks for a Conversion Model
- [ 65232 ] Robust Graph Structure Learning over Images via Multiple Statistical Tests
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65238 ] Peer Prediction for Learning Agents
- [ 65239 ] MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples
- [ 65240 ] Structural Pruning via Latency-Saliency Knapsack
- [ 65241 ] On the Strong Correlation Between Model Invariance and Generalization
- [ 65243 ] Zero-Sum Stochastic Stackelberg Games
- [ 65245 ] Kantorovich Strikes Back! Wasserstein GANs are not Optimal Transport?
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65268 ] EcoFormer: Energy-Saving Attention with Linear Complexity
- [ 65269 ] Fast Vision Transformers with HiLo Attention
- [ 65270 ] VTC-LFC: Vision Transformer Compression with Low-Frequency Components
- [ 65271 ] SAViT: Structure-Aware Vision Transformer Pruning via Collaborative Optimization
- [ 65272 ] Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models
- [ 65274 ] RecursiveMix: Mixed Learning with History
- [ 65275 ] Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning
- [ 65276 ] Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations
- [ 65277 ] MaskPlace: Fast Chip Placement via Reinforced Visual Representation Learning
- [ 65278 ] Feature-Proxy Transformer for Few-Shot Segmentation
Q&A on RocketChat immediately following Lightning Talks
[ Virtual ]
- [ 65279 ] Quantized Training of Gradient Boosting Decision Trees
- [ 65280 ] Coded Residual Transform for Generalizable Deep Metric Learning
- [ 65282 ] Pyramid Attention For Source Code Summarization
- [ 65283 ] MorphTE: Injecting Morphology in Tensorized Embeddings
- [ 65284 ] Expansion and Shrinkage of Localization for Weakly-Supervised Semantic Segmentation
- [ 65285 ] Out-of-Distribution Detection via Conditional Kernel Independence Model
- [ 65286 ] Weak-shot Semantic Segmentation via Dual Similarity Transfer
- [ 65287 ] MoVQ: Modulating Quantized Vectors for High-Fidelity Image Generation
- [ 65288 ] Falconn++: A Locality-sensitive Filtering Approach for Approximate Nearest Neighbor Search
- [ 65289 ] ViSioNS: Visual Search in Natural Scenes Benchmark
Q&A on RocketChat immediately following Lightning Talks