Weather4cast - Super-Resolution Rain Movie Prediction under Spatio-temporal Shifts
The Weather4cast NeurIPS Competition has high practical impact for society: unusual weather is increasing all over the world, reflecting ongoing climate change and affecting communities in agriculture, transport, public health and safety, and beyond. Can you predict future rain patterns with modern machine learning algorithms? Apply spatio-temporal modelling to complex dynamic systems. Get access to unique large-scale data and demonstrate temporal and spatial transfer learning under strong distributional shifts. We provide a super-resolution challenge of high relevance to local events: predict future weather as measured by ground-based high-resolution rain radar weather stations. In addition to movies comprising rain radar maps, you get large-scale multi-band satellite sensor images for exploiting data fusion. Winning models will advance key areas of methods research in machine learning, of relevance beyond the application domain.
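The simplest baseline for this kind of spatio-temporal forecasting task is persistence: repeat the last observed radar frame. A minimal sketch, assuming radar movies as (T, H, W) arrays — the shapes and function names are illustrative, not the competition's API:

```python
import numpy as np

def persistence_forecast(frames: np.ndarray, horizon: int) -> np.ndarray:
    """Repeat the last observed radar frame `horizon` times.

    frames: (T, H, W) array of past rain-rate maps.
    Returns a (horizon, H, W) forecast.
    """
    last = frames[-1]
    return np.repeat(last[None, :, :], horizon, axis=0)

# Toy example: 4 past frames of an 8x8 radar map, forecast 2 steps ahead.
rng = np.random.default_rng(0)
past = rng.random((4, 8, 8))
forecast = persistence_forecast(past, horizon=2)
```

Any learned model entered in the challenge should at minimum beat this baseline on the held-out radar frames.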
How to negotiate industry offers
Join the teams at Rora and 81cents to get the tools, information, and data you need to negotiate your next offer in AI more confidently.
Some of the topics we'll cover in the 1.5-hour session (with half an hour for Q&A) are:
- Understanding the fundamentals of compensation in tech (particularly around equity, bonus structures, etc.)
- How to get over your fears of negotiating
- How to decide which company / offer is right for you
- How to negotiate without counter offers and without knowing "market value"
- How to respond to pushback from recruiters and other guilt-tripping / lowballing / pressure tactics
- How to avoid having an offer rescinded
- How to negotiate the deadline of an offer
- Walking through a timeline of the negotiation process for a new offer
Each panel session is split into four 30-minute blocks, each composed of a set of lightning talks and a deep dive session on related topics. The deep dive will begin immediately after the lightning talks and their Q&A (possibly before the 15 minutes are up). We will not take any questions via microphone; instead, please use Slido (see the embedding below, or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a Zoom link that you can use to join the session for Q&A.
Finally, some important don'ts: DO NOT share any Zoom or Slido information publicly. DO NOT join Zoom if you are not presenting or moderating.
VisDA 2022 Challenge: Sim2Real Domain Adaptation for Industrial Recycling
Efficient post-consumer waste recycling is one of the key challenges of modern society, as countries struggle to find sustainable solutions to rapidly rising waste levels and avoid increased soil and sea pollution. The US is one of the leading countries in waste generation by volume but recycles less than 35% of its recyclable waste. Recyclable waste is sorted according to material type (paper, plastic, etc.) in material recovery facilities (MRFs), which still heavily rely on manual sorting. Computer vision solutions are an essential component in automating waste sorting and ultimately solving the pollution problem. In this sixth iteration of the VisDA challenge, we introduce a simulation-to-real (Sim2Real) semantic image segmentation competition for industrial waste sorting. We aim to answer the question: can synthetic data augmentation improve performance on this task and help adapt to changing data distributions? Label-efficient and reliable semantic segmentation is essential for this setting, but it differs significantly from existing semantic segmentation datasets: waste objects are typically severely deformed and randomly located, which limits the efficacy of both shape and context priors, and categories have long-tailed distributions and high clutter. Synthetic data augmentation can benefit such applications due to the difficulty of obtaining labels for rare categories. However, new solutions are needed to overcome the large domain gap between simulated and real images. Natural domain shift due to factors such as MRF location, season, machinery in use, etc., also needs to be handled in this application. Competitors will have access to two sources of training data: a novel procedurally generated synthetic waste sorting dataset, SynthWaste, as well as fully-annotated waste sorting data collected from a real material recovery facility. The target test set will be real data from a different MRF.
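Semantic segmentation challenges of this kind are typically scored by per-class intersection-over-union (IoU) and its mean (mIoU); the exact VisDA 2022 metric is not stated here, so this is an assumption. A minimal sketch over integer label maps:

```python
import numpy as np

def per_class_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> np.ndarray:
    """Intersection-over-union per class; NaN for classes absent from both maps."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious[c] = inter / union
    return ious

# Toy 2x2 label maps with 3 classes.
pred = np.array([[0, 0], [1, 2]])
target = np.array([[0, 1], [1, 2]])
ious = per_class_iou(pred, target, num_classes=3)
miou = np.nanmean(ious)  # mean over classes that occur
```

Long-tailed class distributions, as described above, make the per-class breakdown more informative than the mean alone.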
The Third Neural MMO Challenge: Learning to Specialize in Massively Multiagent Open Worlds
Neural MMO is an open-source environment for agent-based intelligence research featuring large maps with large populations, long time horizons, and open-ended multi-task objectives. We propose a benchmark on this platform wherein participants train and submit agents to accomplish loosely specified goals -- both as individuals and as part of a team. The submitted agents are evaluated against thousands of other such user-submitted agents. Participants get started with a publicly available code base for Neural MMO, scripted and learned baseline models, and training/evaluation/visualization packages. Our objective is to foster the design and implementation of algorithms and methods for adapting modern agent-based learning methods (particularly reinforcement learning) to a more general setting not limited to few agents, narrowly defined tasks, or short time horizons. Neural MMO provides a convenient setting for exploring these ideas without the computational inefficiency typically associated with larger environments.
OGB-LSC 2022: A Large-Scale Challenge for ML on Graphs
Enabling effective and efficient machine learning (ML) over large-scale graph data (e.g., graphs with billions of edges) can have a huge impact on both industrial and scientific applications. At KDD Cup 2021, we organized the OGB Large-Scale Challenge (OGB-LSC), where we provided large and realistic graph ML tasks. Our KDD Cup attracted huge attention from the graph ML community (more than 500 team registrations across the globe), spurring the development of innovative methods that yielded significant performance breakthroughs. However, the problem of machine learning over large graphs is not solved yet, and it is important for the community to engage in a focused multi-year effort in this area (like ImageNet and MS-COCO). Here we propose an annual ML challenge around large-scale graph datasets, which will drive forward method development and allow for tracking progress. We propose the 2nd OGB-LSC (referred to as OGB-LSC 2022) around the OGB-LSC datasets. Our proposed challenge consists of three tracks, covering the core graph ML tasks of node-level prediction (academic paper classification with 240 million nodes), link-level prediction (knowledge graph completion with 90 million entities), and graph-level prediction (molecular property prediction with 4 million graphs). Importantly, we have updated two out of the three datasets based on the lessons learned from our KDD Cup, so that the resulting datasets are more challenging and realistic. Our datasets are extensively validated through our baseline analyses and last year's KDD Cup. We also provide the baseline code as well as a Python package to easily load the datasets and evaluate model performance.
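Knowledge graph completion, as in the link-level track above, is commonly scored by mean reciprocal rank (MRR) over candidate entities -- a minimal sketch with illustrative array shapes, standing in for the official `ogb.lsc` evaluator:

```python
import numpy as np

def mean_reciprocal_rank(scores: np.ndarray, true_idx: np.ndarray) -> float:
    """scores: (Q, C) candidate scores per query; true_idx: (Q,) index of the
    correct candidate. Rank = 1 + number of candidates scored strictly higher
    than the true one."""
    true_scores = scores[np.arange(len(true_idx)), true_idx]
    ranks = 1 + np.sum(scores > true_scores[:, None], axis=1)
    return float(np.mean(1.0 / ranks))

scores = np.array([[0.9, 0.1, 0.2],   # true candidate 0 ranked 1st
                   [0.3, 0.8, 0.5]])  # true candidate 0 ranked 3rd
mrr = mean_reciprocal_rank(scores, np.array([0, 0]))
# (1/1 + 1/3) / 2 = 2/3
```

In practice the candidate set is a sampled subset of the 90 million entities, since scoring all of them per query is infeasible.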
Open Catalyst Challenge
Advancements to renewable energy processes are needed urgently to address climate change and energy scarcity around the world. Many of these processes, including the generation of electricity through fuel cells or fuel generation from renewable resources, are driven through chemical reactions. The use of catalysts in these chemical reactions plays a key role in developing cost-effective solutions by enabling new reactions and improving their efficiency. Unfortunately, the discovery of new catalyst materials is limited due to the high cost of computational atomic simulations and experimental studies. Machine learning has the potential to reduce the cost of computational simulations by orders of magnitude. By filtering potential catalyst materials based on these simulations, candidates of higher promise may be selected for experimental testing, and the rate at which new catalysts are discovered could be greatly accelerated. The 2nd edition of the Open Catalyst Challenge invites participants to submit results of machine learning models that simulate the interaction of a molecule on a catalyst's surface. Specifically, the task is to predict the energy of an adsorbate-catalyst system in its relaxed state starting from an arbitrary initial state. From these values, the catalyst's impact on the overall rate of a chemical reaction may be estimated, a key factor in filtering potential catalyst materials. Competition participants are provided training and validation datasets containing over 6 million data samples from a wide variety of catalyst materials, and a new testing dataset specific to the competition. Results will be evaluated, and winners determined, by comparing against the computationally expensive approach of Density Functional Theory to verify the predicted relaxed energies. Baseline models and helper code are available on GitHub: https://github.com/open-catalyst-project/ocp.
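Comparing predicted relaxed energies against DFT-verified values, as described above, boils down to simple error metrics over per-system energies. A toy sketch -- the 0.02 eV success threshold is an illustrative assumption, not the official scoring rule:

```python
import numpy as np

def energy_metrics(pred: np.ndarray, target: np.ndarray, threshold: float = 0.02):
    """Mean absolute error (eV) and the fraction of systems whose predicted
    relaxed energy falls within `threshold` eV of the DFT reference."""
    err = np.abs(pred - target)
    return float(err.mean()), float((err < threshold).mean())

# Three hypothetical adsorbate-catalyst systems (energies in eV).
pred = np.array([-1.01, -2.50, 0.10])
target = np.array([-1.00, -2.40, 0.10])
mae, within = energy_metrics(pred, target)
# errors: 0.01, 0.10, 0.00 eV -> two of three within the 0.02 eV threshold
```

The within-threshold fraction matters because a prediction is only useful for screening if it is accurate enough to rank candidate catalysts correctly.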
Habitat Rearrangement Challenge
We propose the Habitat Rearrangement Challenge. Specifically, a virtual robot (Fetch mobile manipulator) is spawned in a previously unseen simulation environment and asked to rearrange objects from initial to desired positions -- picking/placing objects from receptacles (counter, sink, sofa, table), opening/closing containers (drawers, fridges) as necessary. The robot operates entirely from onboard sensing -- head- and arm-mounted RGB-D cameras, proprioceptive joint-position sensors (for the arm), and egomotion sensors (for the mobile base) -- and may not access any privileged state information (no prebuilt maps, no 3D models of rooms or objects, no physically-implausible sensors providing knowledge of mass, friction, articulation of containers). This is a challenging embodied AI task involving embodied perception, mobile manipulation, sequential decision making in long-horizon tasks, and (potentially) deep reinforcement and imitation learning. Developing such embodied intelligent systems is a goal of deep scientific and societal value, including practical applications in home assistant robots.
The Trojan Detection Challenge
A growing concern for the security of ML systems is the possibility of Trojan attacks on neural networks. There is now considerable literature on methods for detecting these attacks. We propose the Trojan Detection Challenge to further the community's understanding of methods to construct and detect Trojans. This competition will consist of complementary tracks on detecting/analyzing Trojans and creating evasive Trojans. Participants will be tasked with devising methods to better detect Trojans using a new dataset containing over 6,000 neural networks. Code and evaluations from three established baseline detectors will provide a starting point, and a novel Minimal Trojan attack will challenge participants to push the state of the art in Trojan detection. Ultimately, we hope our competition spurs practical innovations and clarifies deep questions surrounding the offense-defense balance of Trojan attacks.
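Detection tracks of this kind are commonly scored by AUROC over per-network detector scores (treating "Trojaned" as the positive class); the exact metric here is an assumption. A minimal rank-based sketch:

```python
import numpy as np

def auroc(scores: np.ndarray, labels: np.ndarray) -> float:
    """AUROC via the Mann-Whitney U formulation: the probability that a random
    positive (Trojaned) network outscores a random negative (clean) one,
    counting ties as 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))

# Four hypothetical networks: detector scores and ground-truth Trojan labels.
scores = np.array([0.9, 0.8, 0.4, 0.3])
labels = np.array([1, 1, 0, 0])
score = auroc(scores, labels)  # perfect separation -> 1.0
```

A detector that assigns random scores lands near 0.5; the evasive-Trojan track then amounts to pushing detectors' AUROC back toward that chance level.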