


Competition: AutoML Decathlon: Diverse Tasks, Modern Methods, and Efficiency at Scale Wed 7 Dec 07:00 a.m.  

Samuel Guo · Cong Xu · Nicholas Roberts · Misha Khodak · Junhong Shen · Evan Sparks · Ameet Talwalkar · Yuriy Nevmyvaka · Frederic Sala · Anderson Schneider

As more areas beyond the traditional AI domains (e.g., computer vision and natural language processing) seek to take advantage of data-driven tools, the need for developing ML systems that can adapt to a wide range of downstream tasks in an efficient and automatic way continues to grow. The AutoML for the 2020s competition aims to catalyze research in this area and establish a benchmark for the current state of automated machine learning. Unlike previous challenges which focus on a single class of methods such as non-deep-learning AutoML, hyperparameter optimization, or meta-learning, this competition proposes to (1) evaluate automation on a diverse set of small and large-scale tasks, and (2) allow the incorporation of the latest methods such as neural architecture search and unsupervised pretraining. To this end, we curate 20 datasets that represent a broad spectrum of practical applications in scientific, technological, and industrial domains. Participants are given a set of 10 development tasks selected from these datasets and are required to come up with automated programs that perform well on as many problems as possible and generalize to the remaining private test tasks. To ensure efficiency, the evaluation will be conducted under a fixed computational budget. To ensure robustness, the performance profiles methodology is used for determining the winners. The organizers will provide computational resources to the participants as needed and monetary prizes to the winners.
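The performance-profiles ranking mentioned above can be illustrated with a small sketch (in the style of Dolan–Moré profiles): for each method, count the fraction of tasks on which its loss is within a factor tau of the best loss any method achieved. The scores below are hypothetical, not competition data, and this is not the organizers' scoring code.

```python
# Sketch of the performance-profiles methodology used for robust ranking:
# for each method, the fraction of tasks on which its loss is within a
# factor tau of the best loss achieved by any method on that task.

def performance_profile(losses, taus):
    """losses: {method: [loss per task]} (lower is better, all positive)."""
    methods = list(losses)
    n_tasks = len(next(iter(losses.values())))
    best = [min(losses[m][t] for m in methods) for t in range(n_tasks)]
    profile = {}
    for m in methods:
        ratios = [losses[m][t] / best[t] for t in range(n_tasks)]
        profile[m] = [sum(r <= tau for r in ratios) / n_tasks for tau in taus]
    return profile

# Hypothetical losses for two automated pipelines on three tasks.
losses = {"auto_a": [1.0, 2.0, 4.0], "auto_b": [1.5, 1.8, 8.0]}
prof = performance_profile(losses, taus=[1.0, 2.0, 4.0])
# prof["auto_a"][0] is the fraction of tasks on which auto_a ties for best.
```

A method whose profile dominates another's at every tau is robustly better across the task suite, which is why this ranking is less sensitive to outlier tasks than a simple average.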


Competition: Multimodal Single-Cell Integration Across Time, Individuals, and Batches Wed 7 Dec 07:00 a.m.  

Daniel Burkhardt · Jonathan Bloom · Robrecht Cannoodt · Malte Luecken · Smita Krishnaswamy · Christopher Lance · Angela Pisco · Fabian Theis

In this workshop, we will hear presentations from winners and competitors in the Multimodal Single-Cell Integration Challenge. For more information about the competition, see our competition page: https://www.kaggle.com/competitions/open-problems-multimodal/


Schedule - all times UTC
| Presentation Name | Start | Stop | Team Name | Presenter Name |
|---|---|---|---|---|
| Competition Overview | 13:00 | 13:10 | Hosts | Daniel Burkhardt |
| First place winner | 13:10 | 13:30 | Shuji Suzuki | Shuji Suzuki |
| Third place winner | 13:30 | 13:50 | Makotu | makoto hyodo |
| Fifth place | 13:50 | 14:00 | Lucky Shake | Jeroen Cerpentier |
| Second place winner | 14:00 | 14:20 | senkin & tmp | Jin Zhan |
| Fourth place | 14:20 | 14:30 | Oliver Wang | Guoxuan Wang |
| Seventh place | 14:30 | 14:40 | chromosom | Yury Shapchyts |
| Eighth place | 14:40 | 14:50 | vialactea | Fernando Goncalves |
| Hosts choice | 14:50 | 15:00 | Kha \| MT \| B \| Ambros | Ambros Marzetta |
| Top Shake-up | 15:00 | 15:15 | One&Only | Tianyu Liu |
| Top Shake-up | 15:15 | 15:30 | DANCE | Hongzhi Wen |
| Hosts choice | 15:30 | 15:45 | sB2 | Alexander Chervov |
| Wrap Up | 15:45 | 15:50 | Hosts | Daniel Burkhardt |


Competition: Reconnaissance Blind Chess: An Unsolved Challenge for Multi-Agent Decision Making Under Uncertainty Wed 7 Dec 07:00 a.m.  

Ryan Gardner · Gino Perrotta · Corey Lowman · Casey Richardson · Andrew Newman · Jared Markowitz · Nathan Drenkow · Bart Paulhamus · Ashley J Llorens · Todd Neller · Raman Arora · Bo Li · Mykel J Kochenderfer

Reconnaissance Blind Chess (RBC) is like chess, except that a player cannot, in general, see her opponent's pieces. Instead, each turn, each player chooses a 3x3 square of the board to observe privately. State-of-the-art algorithms, including those used to create agents for games such as chess, Go, and poker, break down in Reconnaissance Blind Chess for several reasons, including the imperfect information, the absence of obvious abstractions, and the lack of common knowledge. Build the best bot for this challenge of making strong decisions in competitive multi-agent scenarios in the face of uncertainty!
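One sub-problem an RBC bot faces each turn is choosing which 3x3 square to sense. A toy sketch (the belief model and scoring rule below are invented for illustration, not part of the official competition kit): keep a per-square occupancy probability and sense the window with the most uncertainty.

```python
import math

# Toy sketch of RBC sense-square selection: maintain an 8x8 grid of
# probabilities that an opponent piece occupies each square, and pick the
# 3x3 window whose cells have the greatest total binary entropy.

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def best_sense_square(belief):
    """belief: 8x8 occupancy probabilities; returns (row, col) of the center
    of the most informative fully-on-board 3x3 window."""
    best, best_score = None, -1.0
    for r in range(1, 7):
        for c in range(1, 7):
            score = sum(binary_entropy(belief[r + dr][c + dc])
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            if score > best_score:
                best, best_score = (r, c), score
    return best

belief = [[0.0] * 8 for _ in range(8)]
for r in (3, 4, 5):
    for c in (3, 4, 5):
        belief[r][c] = 0.5  # a cluster of maximally uncertain squares
assert best_sense_square(belief) == (4, 4)
```

A real agent would update this belief from move history and sense results; the point here is only that sensing is itself a decision problem layered on top of move selection.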


Competition: The CityLearn Challenge 2022 Wed 7 Dec 07:00 a.m.  

Zoltan Nagy · Kingsley Nweye · Sharada Mohanty · Siva Sankaranarayanan · Jan Drgona · Tianzhen Hong · Sourav Dey · Gregor Henze

Reinforcement learning has gained popularity as a model-free and adaptive controller for the built environment in demand-response applications. However, a lack of standardization in previous research has made it difficult to compare different RL algorithms with each other. It is also unclear how much effort is required to solve each specific problem in the building domain and how well a trained RL agent will scale up to new environments. The CityLearn Challenge 2022 provides an avenue to address these problems by leveraging CityLearn, an OpenAI Gym environment for the implementation of RL agents for demand response. The challenge utilizes operational electricity demand data to develop an equivalent digital twin model of 20 buildings. Participants are to develop energy management agents for battery charge and discharge control in each building, with the goal of minimizing electricity demand from the grid, the electricity bill, and greenhouse gas emissions. We provide a baseline rule-based control (RBC) agent for the evaluation of the RL agents' performance and rank the participants according to their solutions' ability to outperform the baseline.
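The evaluation setup described above can be sketched as a Gym-style interaction loop with a rule-based controller. The `StubBuildingEnv` below is an invented stand-in with made-up dynamics, not the CityLearn environment or its API; it only illustrates the reset/step loop and the flavor of an RBC baseline.

```python
# Minimal sketch of a Gym-style control loop: a naive rule-based controller
# (RBC) charges the battery overnight and discharges during the evening peak.
# The environment below is a hypothetical stand-in, not CityLearn.

class StubBuildingEnv:
    def __init__(self, hours=24):
        self.hours = hours
        self.t = 0

    def reset(self):
        self.t = 0
        return {"hour": 0, "soc": 0.5}  # hour of day, battery state of charge

    def step(self, action):
        # action in [-1, 1]: negative discharges the battery, positive charges.
        self.t += 1
        obs = {"hour": self.t % 24, "soc": 0.5}
        grid_demand = max(0.0, 1.0 + action)  # charging adds to grid demand
        reward = -grid_demand                 # minimize demand from the grid
        done = self.t >= self.hours
        return obs, reward, done, {}

def rbc_action(obs):
    # Naive rule: charge during off-peak hours 0-5, discharge 17:00-21:00.
    if obs["hour"] < 6:
        return 0.5
    if 17 <= obs["hour"] <= 21:
        return -0.5
    return 0.0

env = StubBuildingEnv()
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    obs, reward, done, _ = env.step(rbc_action(obs))
    total_reward += reward
```

An RL submission would replace `rbc_action` with a learned policy; the challenge ranks entries by how far they improve on such a fixed rule.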


Second AmericasNLP Competition: Speech-to-Text Translation for Indigenous Languages of the Americas Wed 7 Dec 07:00 a.m.  

Manuel Mager · Katharina Kann · Abteen Ebrahimi · Félix Arturo Oncevay Marcos · Rodolfo Joel Zevallos Salazar · Adam Wiemerslage · Pavel Denisov · John E. Ortega · Kristine Stenzel · Aldo Alvarez · Luis Chiruzzo · Rolando Coto-Solano · Hilaria Cruz · Sofía Flores-Solórzano · Ivan Vladimir Meza Ruiz · Alexis Palmer · Thang Vu

AmericasNLP aims to encourage and increase the visibility of research on machine learning approaches for Indigenous languages of the Americas, as, until recently, those have often been overlooked by researchers. For the Second AmericasNLP Competition: Speech-to-Text Translation for Indigenous Languages of the Americas we ask participants to develop or contribute to the development of speech-to-text translation systems for five Indigenous languages of the Americas (Bribri, Guaraní, Kotiria, Quechua and Wa’ikhana), for which available resources are extremely limited. The main task of this competition is speech-to-text translation, and we additionally invite submissions to its two subtasks: automatic speech recognition and text-to-text machine translation.


EURO Meets NeurIPS 2022 Vehicle Routing Competition Wed 7 Dec 07:00 a.m.  

Wouter Kool · Laurens Bliek · Yingqian Zhang · Kevin Tierney · Eduardo Uchoa · Thibaut Vidal · Joaquim Gromicho

Solving vehicle routing problems (VRPs) is an essential task for many industrial applications. While VRPs have been traditionally studied in the operations research (OR) domain, they have lately been the subject of extensive work in the machine learning (ML) community. Both the OR and ML communities have begun to integrate ML into their methods, but in vastly different ways. While the OR community mostly relies on simplistic ML methods, the ML community generally uses deep learning, but fails to outperform OR baselines. To address this gap, this competition, a joint effort of several previous competitions, brings together the OR and ML communities to solve a challenging VRP variant on real-world data provided by ORTEC, a leading provider of vehicle routing software. The challenge focuses on both a 'classic' deterministic VRP with time windows (VRPTW) and a dynamic version in which new orders arrive over the course of a day. As a baseline, we will provide a state-of-the-art VRPTW solver and a simple strategy to use it to solve the dynamic variant, thus ensuring that all competition participants have the tools necessary to solve both versions of the problem. We anticipate that the winning method will significantly advance the state of the art for solving routing problems, thereby providing a strong foundation for further research in both the OR and ML communities, as well as a practical impact on the real-world solving of VRPs.
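To give a concrete sense of what "VRP with time windows" means, here is a toy construction heuristic: extend a route with the nearest customer that can still be served before its window closes. This is only an illustration of the constraint structure; the competition baseline is a state-of-the-art solver, not anything like this sketch, and the instance below is invented.

```python
# Toy nearest-feasible-neighbor heuristic for a single-vehicle VRPTW:
# repeatedly visit the closest customer whose time window can still be met.

def nearest_feasible_route(depot, customers, speed=1.0):
    """customers: {name: (x, y, earliest, latest)}; returns the visit order."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    pos, time, route = depot, 0.0, []
    remaining = dict(customers)
    while remaining:
        feasible = []
        for name, (x, y, earliest, latest) in remaining.items():
            arrival = time + dist(pos, (x, y)) / speed
            if arrival <= latest:  # can we still make the window?
                feasible.append((dist(pos, (x, y)), name, max(arrival, earliest)))
        if not feasible:
            break  # remaining customers would need another vehicle
        _, name, service_time = min(feasible)  # nearest feasible customer
        x, y, _, _ = remaining.pop(name)
        pos, time = (x, y), service_time
        route.append(name)
    return route

customers = {"a": (1, 0, 0, 10), "b": (2, 0, 0, 10), "c": (50, 0, 0, 10)}
route = nearest_feasible_route(depot=(0, 0), customers=customers)
# "c" cannot be reached before its window closes, so it is left unrouted.
```

In the dynamic track the set of customers itself changes during the day, so a dispatcher must also decide *when* to commit orders to vehicles, not just in what order to visit them.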


Competition: Weakly Supervised Cell Segmentation in Multi-modality High-Resolution Microscopy Images Wed 7 Dec 07:00 a.m.  

JUN MA · Ronald Xie · Shamini Ayyadhury · Sweta Banerjee · Ritu Gupta · Gary Bader · Bo Wang

Cell segmentation is usually the first step for downstream single-cell analysis in microscopy image-based biology and biomedical research. Deep learning has been widely used for image segmentation, but it is hard to collect a large number of labelled cell images to train models because manually annotating cells is extremely time-consuming and costly. Furthermore, the datasets used are often limited to one modality and lacking in diversity, leading to poor generalization of trained models. This competition aims to benchmark cell segmentation methods that can be applied to various microscopy images across multiple imaging platforms and tissue types. We frame the cell segmentation problem as a weakly supervised learning task to encourage models that use limited labelled and many unlabelled images for cell segmentation, as unlabelled images are relatively easy to obtain in practice. We will implement a U-Net model as a baseline owing to its established success in biomedical image segmentation. This competition could serve as an important step toward universal and fully automatic cell image analysis tools, greatly accelerating the rate of discovery from image-based biological and biomedical research.


Competition: Inferring Physical Properties of Exoplanets From Next-Generation Telescopes Wed 7 Dec 07:05 a.m.  

Kai Hou Yip · Ingo Waldmann · Quentin Changeat · Nikos Nikolaou · Mario Morvan · Ahmed Al-Refaie · Billy Edwards · Angelos Tsiaras · Catarina Alves de Oliveira · James Cho · Pierre-Olivier Lagage · Clare Jenner · Jeyan Thiyagalingam · Giovanna Tinetti

The study of extra-solar planets, or simply exoplanets (planets outside our own Solar System), is fundamentally a grand quest to understand our place in the Universe. Discoveries in the last two decades have redefined what we know about planets and helped us comprehend the uniqueness of our very own Earth. In recent years, however, the focus has shifted from planet detection to planet characterisation, where key planetary properties are inferred from telescope observations using Monte Carlo-based methods. However, the efficiency of sampling-based methodologies is put under strain by the high-resolution observational data from next-generation telescopes, such as the James Webb Space Telescope and the Ariel Space Mission. We propose to host a regular competition with the goal of identifying a reliable and scalable method to perform planetary characterisation. Depending on the chosen track, participants will provide either quartile estimates or the approximate distribution of key planetary properties. They will have access to synthetic spectroscopic data generated from the official simulators for the ESA Ariel Space Mission. The aims of the competition are three-fold: 1) to offer a challenging application for comparing and advancing conditional density estimation methods; 2) to provide a valuable contribution towards reliable and efficient analysis of spectroscopic data, enabling astronomers to build a better picture of planetary demographics; and 3) to promote interaction between ML and exoplanetary science.
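For the quartile-estimate track, a submission ultimately reduces an approximate posterior over a planetary property to three numbers. A minimal sketch of that last step (the samples below are made up, not Ariel simulator output, and `quantile` is a generic linear-interpolation estimator, not the competition's scoring code):

```python
# Sketch of turning posterior samples for a planetary property into the
# quartile estimates (Q1, median, Q3) that the quartile track asks for.

def quantile(samples, q):
    """Linear-interpolation quantile of a list of samples, 0 <= q <= 1."""
    s = sorted(samples)
    idx = q * (len(s) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
    frac = idx - lo
    return s[lo] * (1 - frac) + s[hi] * frac

# Hypothetical posterior samples for one property (e.g. a log-abundance).
posterior_samples = [0.9, 1.1, 1.0, 1.3, 0.7, 1.2, 1.0, 0.8, 1.1]
q1, median, q3 = (quantile(posterior_samples, q) for q in (0.25, 0.5, 0.75))
```

The distribution track asks for the full approximate posterior rather than this three-number summary, which is where conditional density estimation methods come in.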


Spotlight: Featured Papers Panels 3A Wed 7 Dec 11:00 a.m.  

Each panel session is split into four 30-minute blocks composed of a set of lightning talks and a deep dive session related to similar topics. The deep dive will begin immediately after lightning talks and the related Q&A (it might be before the 15 min are over). We will not take any questions via microphone but ask you to use slido (see embedding below or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a zoom link that you can use to join the session for Q&A.

Finally some important don'ts: DO NOT share any zoom or slido information publicly. DO NOT join zoom if you are not presenting or moderating.

Lightning Talk
Shu Ding · Wanxing Chang · Jiyang Guan · Mouxiang Chen · Guan Gui · Yue Tan · Shiyun Lin · Guodong Long · Yuze Han · Wei Wang · Zhen Zhao · Ye Shi · Jian Liang · Chenghao Liu · Lei Qi · Ran He · Jie Ma · Zemin Liu · Xiang Li · Hoang Tuan · Luping Zhou · Zhihua Zhang · Jianling Sun · Jingya Wang · LU LIU · Tianyi Zhou · Lei Wang · Jing Jiang · Yinghuan Shi
Lightning Talk
shuwen yang · Xu Zhang · Delvin Ce Zhang · Lan-Zhe Guo · Renzhe Xu · Zhuoer Xu · Yao-Xiang Ding · Weihan Li · Xingxuan Zhang · Xi-Zhu Wu · Zhenyuan Yuan · Hady Lauw · Yu Qi · Yi-Ge Zhang · Zhihao Yang · Guanghui Zhu · Dong Li · Changhua Meng · Kun Zhou · Gang Pan · Zhi-Fan Wu · Bo Li · Minghui Zhu · Zhi-Hua Zhou · Yafeng Zhang · Yingxueff Zhang · shiwen cui · Jie-Jing Shao · Zhanguang Zhang · Zhenzhe Ying · Xiaolong Chen · Yu-Feng Li · Guojie Song · Peng Cui · Weiqiang Wang · Ming GU · Jianye Hao · Yihua Huang
Lightning Talk
Xu Yan · Zheng Dong · Qiancheng Fu · Jing Tan · Hezhen Hu · Fukun Yin · Weilun Wang · Ke Xu · Heshen Zhan · Wen Liu · Qingshan Xu · Xiaotong Zhao · Chaoda Zheng · Ziheng Duan · Zilong Huang · Xintian Shi · Wengang Zhou · Yew Soon Ong · Pei Cheng · Hujun Bao · Houqiang Li · Wenbing Tao · Jiantao Gao · Bin Kang · Weiwei Xu · Limin Wang · Ruimao Zhang · Tao Chen · Gang Yu · Rynson Lau · Shuguang Cui · Zhen Li
Lightning Talk
Jinzhi Zhang · Hao Jiang · Hongrui Cai · Qi Yi · Yang Jin · Zhi Tian · Rui Zhang · Wanquan Feng · Xiangxiang Chu · Ruofan Tang · yongzhi li · Yadong Mu · Zehuan Yuan · shaohui peng · Zheng Cao · Xiaoming Wang · Xuetao Feng · Xiaolin Wei · Jiaming Guo · Yadong Mu · Yan Wang · Jing Xiao · Xing Hu · Chunhua Shen · Ruqi Huang · Juyong Zhang · Zidong Du · LU FANG · xishan zhang · Qi Guo · Yunji Chen

Spotlight: Featured Papers Panels 3B Wed 7 Dec 11:00 a.m.  

Each panel session is split into four 30-minute blocks composed of a set of lightning talks and a deep dive session related to similar topics. The deep dive will begin immediately after lightning talks and the related Q&A (it might be before the 15 min are over). We will not take any questions via microphone but ask you to use slido (see embedding below or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a zoom link that you can use to join the session for Q&A.

Finally some important don'ts: DO NOT share any zoom or slido information publicly. DO NOT join zoom if you are not presenting or moderating.

Lightning Talk
Tianying Ji · Tongda Xu · Giulia Denevi · Aibek Alanov · Martin Wistuba · Wei Zhang · Yuesong Shen · Massimiliano Pontil · Vadim Titov · Yan Wang · Yu Luo · Daniel Cremers · Yanjun Han · Arlind Kadra · Dailan He · Josif Grabocka · Zhengyuan Zhou · Fuchun Sun · Carlo Ciliberto · Dmitry Vetrov · Mingxuan Jing · Chenjian Gao · Aaron Flores · Tsachy Weissman · Han Gao · Fengxiang He · Kunzan Liu · Wenbing Huang · Hongwei Qin
Lightning Talk
Yu Huang · Tero Karras · Maxim Kodryan · Shiau Hong Lim · Shudong Huang · Ziyu Wang · Siqiao Xue · ILYAS MALIK · Ekaterina Lobacheva · Miika Aittala · Hongjie Wu · Yuhao Zhou · Yingbin Liang · Xiaoming Shi · Jun Zhu · Maksim Nakhodnov · Timo Aila · Yazhou Ren · James Zhang · Longbo Huang · Dmitry Vetrov · Ivor Tsang · Hongyuan Mei · Samuli Laine · Zenglin Xu · Wentao Feng · Jiancheng Lv
Lightning Talk
Sitao Luan · Zhiyuan You · Ruofan Liu · Linhao Qu · Yuwei Fu · Jiaxi Wang · Chunyu Wei · Jian Liang · xiaoyuan luo · Di Wu · Yun Lin · Lei Cui · Ji Wu · Chenqing Hua · Yujun Shen · Qincheng Lu · XIANGLIN YANG · Benoit Boulet · Manning Wang · Di Liu · Lei Huang · Fei Wang · Kai Yang · Jiaqi Zhu · Jin Song Dong · Zhijian Song · Xin Lu · Mingde Zhao · Shuyuan Zhang · Yu Zheng · Xiao-Wen Chang · Xinyi Le · Doina Precup
Lightning Talk
Guanghu Yuan · Yijing Liu · Li Yang · Yongri Piao · Zekang Zhang · Yaxin Xiao · Lin Chen · Yinqi Li · Fajie Yuan · Guangyu Gao · Hong Chang · Qinxian Liu · Zhixiang Wei · Qingqing Ye · Chenyang Lu · Jian Meng · Haibo Hu · Xin Jin · Yudong Li · Miao Zhang · Zhiyuan Fang · Jae-sun Seo · Bingpeng MA · Jian-Wei Zhang · Shiguang Shan · Haozhe Feng · Huaian Chen · Deliang Fan · Huadi Zheng · Jianbo Jiao · Huchuan Lu · Beibei Kong · Miao Zheng · Chengfang Fang · Shujie Li · Zhongwei Wang · Yunchao Wei · Xilin Chen · Jie Shi · Kai Chen · Zihan Zhou · Lei Chen · Yi Jin · Wei Chen · Min Yang · Chenyun YU · Bo Hu · Zang Li · Yu Xu · Xiaohu Qie

Featured Papers Panels 3C Wed 7 Dec 11:00 a.m.  

Each panel session is split into four 30-minute blocks composed of a set of lightning talks and a deep dive session related to similar topics. The deep dive will begin immediately after lightning talks and the related Q&A (it might be before the 15 min are over). We will not take any questions via microphone but ask you to use slido (see embedding below or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a zoom link that you can use to join the session for Q&A.

Finally some important don'ts: DO NOT share any zoom or slido information publicly. DO NOT join zoom if you are not presenting or moderating.


Expo Workshop: PyTorch: New advances for large-scale training and performance optimizations Wed 7 Dec 11:30 a.m.  

Geeta Chauhan · Rohan Varma · Ke Wen · Taylor Robie · Andrew Gu · Anupam Bhatnagar · Bin Bao · Natalia Gimelshein · Animesh Jain · Sherlock Huang

[ protected link dropped ]

Large language models and generative AI have been key drivers of new innovations in large-scale training and performance optimization. In this workshop, we will dive deeper into new features and solutions in PyTorch that enable training and performance optimization at scale.

The following topics will be covered by the PyTorch team in this workshop. The sessions are divided over two days: the Nov 28 session will cover the PyTorch Distributed and profiling topics, and the Dec 5 session will cover the PyTorch compiler-based solutions.

## Part 1: Nov 28 (Hybrid, in-person and remote), 9:30a-12:30p CST (UTC-6), Room # 291
-------------------------------------------------------------------------------------------------------

1. FSDP Production Readiness, Speakers: Rohan Varma, Andrew Gu
We will dive deep into recent advances in FSDP which have enabled better throughput, memory savings, and extensibility. These improvements have unblocked using FSDP for models of different modalities and varying sizes (model and data). We will share best practices for applying these features to specific use cases such as XLMR, FLAVA, ViT, DHEN, and GPT-3-style models.

2. Automated Pipeline Parallelism for PyTorch, Speaker: Ke Wen
PiPPy is a library that provides automated pipeline parallelism for PyTorch models. PiPPy consists of a compiler stack capable of automatically splitting a model into stages without requiring intrusive code changes to the model. It also provides a distributed runtime that helps users to distribute the split stages to multiple devices and multiple hosts and orchestrates micro-batch execution in an overlapped fashion. We are going to demonstrate the use of PiPPy for Hugging Face models on clouds.

3. PyTorch Profiler, Speaker: Taylor Robie
Dive into recent enhancements to the PyTorch profiler's capabilities: Python function tracing, data flow capture, and memory profiling, and how they enable previously impossible performance analysis.

4. Profiling Distributed Training Workloads, Speaker: Anupam Bhatnagar
We will present Holistic Trace Analysis (HTA), a tool to identify computation, communication and memory bottlenecks in distributed training. HTA identifies these bottlenecks by analyzing the traces collected using the PyTorch Profiler.

5. TorchBench, Speaker: Xu Zhao
In this talk we present PyTorch Benchmark (TorchBench), a benchmarking suite that provides quick and stable performance signals to hold the line of performance in PyTorch development. TorchBench identifies performance regressions and provides CI services for PyTorch developers to test their PRs. It can also be used to profile specific models and identify optimization opportunities.


## Part 2: Dec 5 (Virtual), 9:30a - 11:30a PST (UTC-8) / 11:30a - 1:30p CST (UTC-6)
------------------------------------------------------------------------------------------------

6. A deep dive into TorchDynamo, Speaker: Animesh Jain
This talk presents a deep dive into TorchDynamo, a Python-level JIT compiler designed to make unmodified PyTorch programs faster. It rewrites Python bytecode in order to extract sequences of PyTorch operations into a graph, which is then just-in-time compiled with a customizable backend. It is designed to mix Python execution with compiled backends to get the best of both worlds: usability and performance.
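The user-facing entry point to this workflow in PyTorch 2.0+ is `torch.compile`. A minimal sketch (not the workshop's demo code; the `"eager"` debug backend is used here so the snippet runs without a codegen toolchain, whereas the default backend is TorchInductor):

```python
import torch

# Minimal sketch of the TorchDynamo workflow via torch.compile: Dynamo
# captures the function's PyTorch ops into a graph on the first call and
# hands it to a backend. The "eager" backend simply replays the graph,
# which keeps this illustration dependency-free.

def f(x):
    return torch.relu(x) * 2.0 + 1.0

compiled_f = torch.compile(f, backend="eager")

x = torch.tensor([-1.0, 0.0, 2.0])
assert torch.allclose(compiled_f(x), f(x))  # same results as eager execution
```

Because Dynamo falls back to normal Python execution for anything it cannot capture, unmodified programs keep working even when only parts of them compile.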

7. A deep dive into TorchInductor, Speakers: Bin Bao, Natalia Gimelshein
This talk presents a deep dive into the design principles of TorchInductor, the PyTorch compiler backend: the lowering stack it uses to transform PyTorch programs, and the optimization techniques and codegen technologies it employs.

8. How backends integrate into the PyTorch compiler stack, Speaker: Sherlock Huang
This talk dives deep into the backend integration points in the PyTorch compiler stack. It will explain the three types of IR used across the stack: Torch IR produced by Dynamo, ATen IR produced by AOTAutograd, and the loop-level IR used in Inductor. It will introduce the infrastructure and utilities available for backend integration, including an IR-agnostic pattern matcher and a graph partitioner.


Spotlight: Featured Papers Panels 4B Wed 7 Dec 07:00 p.m.  

Each panel session is split into four 30-minute blocks composed of a set of lightning talks and a deep dive session related to similar topics. The deep dive will begin immediately after lightning talks and the related Q&A (it might be before the 15 min are over). We will not take any questions via microphone but ask you to use slido (see embedding below or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a zoom link that you can use to join the session for Q&A.

Finally some important don'ts: DO NOT share any zoom or slido information publicly. DO NOT join zoom if you are not presenting or moderating.

Lightning Talk
Alexandra Senderovich · Zhijie Deng · Navid Ansari · Xuefei Ning · Yasmin Salehi · Xiang Huang · Chenyang Wu · Kelsey Allen · Jiaqi Han · Nikita Balagansky · Tatiana Lopez-Guevara · Tianci Li · Zhanhong Ye · Zixuan Zhou · Feng Zhou · Ekaterina Bulatova · Daniil Gavrilov · Wenbing Huang · Dennis Giannacopoulos · Hans-peter Seidel · Anton Obukhov · Kimberly Stachenfeld · Hongsheng Liu · Jun Zhu · Junbo Zhao · Hengbo Ma · Nima Vahidi Ferdowsi · Zongzhang Zhang · Vahid Babaei · Jiachen Li · Alvaro Sanchez Gonzalez · Yang Yu · Shi Ji · Maxim Rakhuba · Tianchen Zhao · Yiping Deng · Peter Battaglia · Josh Tenenbaum · Zidong Wang · Chuang Gan · Changcheng Tang · Jessica Hamrick · Kang Yang · Tobias Pfaff · Yang Li · Shuang Liang · Min Wang · Huazhong Yang · Haotian CHU · Yu Wang · Fan Yu · Bei Hua · Lei Chen · Bin Dong
Lightning Talk
Artem Moskalev · Weixia Zhang · Vudtiwat Ngampruetikorn · Anna Sepliarskaia · Dingquan Li · David Schwab · Ivan Sosnovik · Xiongkuo Min · Arnold Smeulders · Guangtao Zhai · Guodong Guo · Xiaokang Yang · Kede Ma
Lightning Talk
Zicheng Zhang · Mancheng Meng · Antoine Guedon · Yue Wu · Wei Mao · Zaiyu Huang · Peihao Chen · Shizhe Chen · yongwei chen · Keqiang Sun · Yi Zhu · chen rui · Hanhui Li · Dongyu Ji · Ziyan Wu · miaomiao Liu · Pascal Monasse · Yu Deng · Shangzhe Wu · Pierre-Louis Guhur · Jiaolong Yang · Kunyang Lin · Makarand Tapaswi · Zhaoyang Huang · Terrence Chen · Jiabao Lei · Jianzhuang Liu · Vincent Lepetit · Zhenyu Xie · Richard I Hartley · Dinggang Shen · Xiaodan Liang · Runhao Zeng · Cordelia Schmid · Michael Kampffmeyer · Mathieu Salzmann · Ning Zhang · Fangyun Wei · Yabin Zhang · Fan Yang · Qifeng Chen · Wei Ke · Quan Wang · Thomas Li · qingling Cai · Kui Jia · Ivan Laptev · Mingkui Tan · Xin Tong · Hongsheng Li · Xiaodan Liang · Chuang Gan
Lightning Talk
Ziyue Jiang · Zeeshan Khan · Yuxiang Yang · Chenze Shao · Yichong Leng · Zehao Yu · Wenguan Wang · Xian Liu · Zehua Chen · Yang Feng · Qianyi Wu · James Liang · C.V. Jawahar · Junjie Yang · Zhe Su · Songyou Peng · Yufei Xu · Junliang Guo · Michael Niemeyer · Hang Zhou · Zhou Zhao · Makarand Tapaswi · Dongfang Liu · Qian Yang · Torsten Sattler · Yuanqi Du · Haohe Liu · Jing Zhang · Andreas Geiger · Yi Ren · Long Lan · Jiawei Chen · Wayne Wu · Dahua Lin · Dacheng Tao · Xu Tan · Jinglin Liu · Ziwei Liu · 振辉 叶 · Danilo Mandic · Lei He · Xiangyang Li · Tao Qin · sheng zhao · Tie-Yan Liu

Spotlight: Featured Papers Panels 4A Wed 7 Dec 07:00 p.m.  

Each panel session is split into four 30-minute blocks composed of a set of lightning talks and a deep dive session related to similar topics. The deep dive will begin immediately after lightning talks and the related Q&A (it might be before the 15 min are over). We will not take any questions via microphone but ask you to use slido (see embedding below or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a zoom link that you can use to join the session for Q&A.

Finally some important don'ts: DO NOT share any zoom or slido information publicly. DO NOT join zoom if you are not presenting or moderating.

Lightning Talk
Jiawei Huang · Su Jia · Abdurakhmon Sadiev · Ruomin Huang · Yuanyu Wan · Denizalp Goktas · Jiechao Guan · Andrew Li · Wei-Wei Tu · Li Zhao · Amy Greenwald · Jiawei Huang · Dmitry Kovalev · Yong Liu · Wenjie Liu · Peter Richtarik · Lijun Zhang · Zhiwu Lu · R Ravi · Tao Qin · Wei Chen · Hu Ding · Nan Jiang · Tie-Yan Liu
Lightning Talk
Barakeel Fanseu Kamhoua · Hualin Zhang · Taiki Miyagawa · Tomoya Murata · Xin Lyu · Yan Dai · Elena Grigorescu · Zhipeng Tu · Lijun Zhang · Taiji Suzuki · Wei Jiang · Haipeng Luo · Lin Zhang · Xi Wang · Young-San Lin · Huan Xiong · Liyu Chen · Bin Gu · Jinfeng Yi · Yongqiang Chen · Sandeep Silwal · Yiguang Hong · Maoyuan Song · Lei Wang · Tianbao Yang · Han Yang · MA Kaili · Samson Zhou · Deming Yuan · Bo Han · Guodong Shi · Bo Li · James Cheng
Lightning Talk
Zhihan Gao · Yabin Wang · Xingyu Qu · Luziwei Leng · Mingqing Xiao · Bohan Wang · Yu Shen · Zhiwu Huang · Xingjian Shi · Qi Meng · Yupeng Lu · Diyang Li · Qingyan Meng · Kaiwei Che · Yang Li · Hao Wang · Huishuai Zhang · Zongpeng Zhang · Kaixuan Zhang · Xiaopeng Hong · Xiaohan Zhao · Di He · Jianguo Zhang · Yaofeng Tu · Bin Gu · Yi Zhu · Ruoyu Sun · Yuyang (Bernie) Wang · Zhouchen Lin · Qinghu Meng · Wei Chen · Wentao Zhang · Bin CUI · Jie Cheng · Zhi-Ming Ma · Mu Li · Qinghai Guo · Dit-Yan Yeung · Tie-Yan Liu · Jianxing Liao
Lightning Talk
Yunhao Tang · LING LIANG · Thomas Chau · Daeha Kim · Junbiao Cui · Rui Lu · Lei Song · Byung Cheol Song · Andrew Zhao · Remi Munos · Łukasz Dudziak · Jiye Liang · Ke Xue · Kaidi Xu · Mark Rowland · Hongkai Wen · Xing Hu · Xiaobin Huang · Simon Du · Nicholas Lane · Chao Qian · Lei Deng · Bernardo Avila Pires · Gao Huang · Will Dabney · Mohamed Abdelfattah · Yuan Xie · Marc Bellemare

Featured Papers Panels 4C Wed 7 Dec 07:00 p.m.  

Each panel session is split into four 30-minute blocks composed of a set of lightning talks and a deep dive session related to similar topics. The deep dive will begin immediately after lightning talks and the related Q&A (it might be before the 15 min are over). We will not take any questions via microphone but ask you to use slido (see embedding below or go to https://slido.com and use keyword #neurips22). If you are a presenter or moderator, you should see a zoom link that you can use to join the session for Q&A.

Finally some important don'ts: DO NOT share any zoom or slido information publicly. DO NOT join zoom if you are not presenting or moderating.


Competition: NL4Opt: Formulating Optimization Problems Based on Their Natural Language Descriptions Wed 7 Dec 07:00 p.m.  

Rindranirina Ramamonjison · Timothy Yu · Giuseppe Carenini · Bissan Ghaddar · Raymond Li · Shiqi He · Haley Li · Amin Banitalebi · Zirui Zhou · Yong Zhang

We propose a competition for extracting the meaning and formulation of an optimization problem based on its text description. For this competition, we have created the first dataset of linear programming (LP) word problems. A deep understanding of the problem description is an important first step towards generating the problem formulation. Therefore, we present two challenging sub-tasks for the participants. For the first sub-task, the goal is to recognize and label the semantic entities that correspond to the components of the optimization problem. For the second sub-task, the goal is to generate a meaning representation (i.e. a logical form) of the problem from its description and its problem entities. This intermediate representation of an LP problem will be converted to a canonical form for evaluation. The proposed task will be attractive because of its compelling application, the low barrier to entry of the first sub-task, and the new set of challenges the second sub-task brings to semantic analysis and evaluation. The goal of this competition is to increase the accessibility and usability of optimization solvers, allowing non-experts to solve important problems from various industries. In addition, this new task will promote the development of novel machine learning applications and datasets for operations research.
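The kind of canonical form the second sub-task targets can be illustrated with a small sketch. The word problem, its structured representation, and the tiny grid-search "solver" below are all invented for illustration; they are not from the competition dataset or its evaluation code.

```python
# Sketch of mapping an LP word problem to a canonical form
# "maximize c.x subject to A x <= b, x >= 0". Invented example:
#   "A bakery sells cakes (profit 5) and cookie batches (profit 2). A cake
#    takes 2 oven-hours, a batch takes 1; 8 oven-hours are available, and
#    at most 3 cakes can be made. Maximize profit."

canonical = {
    "objective": [5, 2],               # c: profit per cake, per cookie batch
    "constraints": [([2, 1], 8),       # oven-hours:  2x + y <= 8
                    ([1, 0], 3)],      # cake limit:  x      <= 3
}

def brute_force_lp(lp, grid=range(0, 9)):
    """Tiny integer grid search standing in for a real LP solver (checking only)."""
    c = lp["objective"]
    best, best_val = None, float("-inf")
    for x in grid:
        for y in grid:
            if all(a[0] * x + a[1] * y <= b for a, b in lp["constraints"]):
                val = c[0] * x + c[1] * y
                if val > best_val:
                    best, best_val = (x, y), val
    return best, best_val

solution, value = brute_force_lp(canonical)
# Making 3 cakes and 2 cookie batches uses all 8 oven-hours for profit 19.
```

A participant system would produce the `canonical` structure automatically from the text; evaluation then compares canonical forms rather than surface strings, so paraphrases of the same problem score identically.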