NIPS 2018 Competition Track


This is the second edition of the NIPS Competition track. We received 21 proposals for data-driven and live competitions on different aspects of machine learning. Proposals were reviewed by highly qualified researchers and experts in challenge organization, and the eight top-scoring competitions were accepted to run and present their results during the NIPS 2018 Competition track days. Evaluation was based on the quality of the data, the interest and impact of the problem, the potential to promote the design of new models, and a sound schedule and management procedure. The eight accepted competitions are listed below; please visit each competition's webpage to read more about the competition, its schedule, and how to participate. Each competition follows its own schedule, defined by its organizers. The results of the competitions, including talks by organizers and top-ranked participants, will be presented during the two Competition track days at NIPS 2018. Organizers and participants will be invited to submit their contributions as book chapters to the upcoming NIPS 2018 Competition book, within the Springer Series on Challenges in Machine Learning.

Competition | Start date (2018) | End date (2018) | Prize
AutoML for Lifelong Machine Learning | July 23rd | Nov 6th | 1st place $10,000; 2nd place $3,000; 3rd place $2,000
Adversarial Vision Challenge | July 2nd | Oct 10th | -
The Conversational Intelligence Challenge 2 (ConvAI2) | March 21st | Sep 30th | 1st place $20,000 of MTurk funding
Tracking Machine Learning Challenge | Sep 7th | March 12th, 2019 (first-phase results already posted) | 1st place $7,000; 2nd place $5,000; 3rd place $3,000; jury prizes: NVIDIA V100 GPU, two travel grants (NIPS and CERN)
Pommerman | June 1st | Nov 21st | 1st place $4,000 + 6k GCE credit; 2nd place $2,000 + 4k GCE credit; 3rd place $1,000 + 2k GCE credit; 4th place 3k GCE credit; top two learning agents: NVIDIA Titan V GPU
InclusiveImages: A challenge of distributional skew, side information, and global inclusion | Sep 5th | Nov 9th | -
The AI Driving Olympics | Oct 1st | Dec 1st (final at live event, Dec 8th) | 1st place $5,000 AWS credits; 2nd place $2,500 AWS credits; 3rd place $1,000 AWS credits
AI for prosthetics | June 1st | Sep 30th | 1st place: 2 x NVIDIA Titan V, travel grants to NIPS, EPFL, Stanford; 2nd place: 1 x NVIDIA Titan V; 3rd place: 1 x NVIDIA Titan V; top 400 participants by 08/15: $250 Google Cloud credits

More details below!


AutoML for Lifelong Machine Learning

Competition summary: In many real-world machine learning applications, AutoML is strongly needed due to the limited machine learning expertise of developers. Moreover, in many real-world applications batches of data arrive daily, weekly, monthly, or yearly, for instance, and the data distributions change relatively slowly over time. This presents a continuous learning, or lifelong machine learning, challenge for an AutoML system. Typical learning problems of this kind include customer relationship management, online advertising, recommendation, sentiment analysis, fraud detection, spam filtering, transportation monitoring, econometrics, patient monitoring, climate monitoring, and manufacturing. In this competition, which we are calling AutoML for Lifelong Machine Learning, large-scale datasets collected from some of these real-world applications will be used. Compared with previous AutoML competitions (http://automl.chalearn.org/), the focus of this competition is on drifting concepts, getting away from the simpler i.i.d. cases. Participants are invited to design a computer program capable of autonomously (without any human intervention) developing predictive models that are trained and evaluated in a lifelong machine learning setting.
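The lifelong setting described above can be pictured with a toy "test-then-train" loop: each incoming batch is first used for evaluation with labels hidden, then revealed so the model can adapt to the drift. Everything below (the drifting stream, the one-feature threshold model) is an illustrative assumption, not the competition's actual data or protocol.

```python
import random

class DriftingStream:
    """Yields (x, y) batches; the true decision threshold drifts over time."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.threshold = 0.3  # initial concept

    def next_batch(self, n=200):
        batch = []
        for _ in range(n):
            x = self.rng.random()
            batch.append((x, int(x > self.threshold)))
        self.threshold = min(0.9, self.threshold + 0.1)  # slow concept drift
        return batch

class OnlineThresholdModel:
    """Predicts with its current estimate, then updates it from revealed labels."""
    def __init__(self):
        self.estimate = 0.5

    def predict(self, x):
        return int(x > self.estimate)

    def update(self, batch):
        pos = [x for x, y in batch if y == 1]
        neg = [x for x, y in batch if y == 0]
        if pos and neg:
            # Place the boundary between the classes seen in this batch.
            self.estimate = (min(pos) + max(neg)) / 2

stream, model = DriftingStream(), OnlineThresholdModel()
accuracies = []
for _ in range(5):
    batch = stream.next_batch()
    # Evaluate first (labels unseen), then train on the revealed labels.
    acc = sum(model.predict(x) == y for x, y in batch) / len(batch)
    accuracies.append(acc)
    model.update(batch)
print([round(a, 2) for a in accuracies])
```

Because the model always lags the drift by one batch, its accuracy stays high but imperfect; a model that never updates would degrade steadily as the concept moves away from its frozen estimate.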

Organizers:

Wei-Wei Tu, 4Paradigm Inc., China, tuww.cn@gmail.com
Hugo Jair Escalante, INAOE (Mexico) & ChaLearn (USA), hugo.jair@gmail.com
Isabelle Guyon, UPSud/INRIA Univ. Paris-Saclay (France) & ChaLearn (USA), guyon@clopinet.com
Daniel L. Silver, Acadia University, Canada, danny.silver@acadiau.ca
Evelyne Viegas, Microsoft, USA, evelynev@microsoft.com
Yuqiang Chen, 4Paradigm Inc., China, chenyuqiang@4paradigm.com
Qiang Yang, 4Paradigm Inc., China, qyang@cse.ust.hk

Webpage: https://www.4paradigm.com/competition/nips2018


Adversarial Vision Challenge

Competition summary: This challenge is designed to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. Modern machine vision algorithms are extremely susceptible to small and almost imperceptible perturbations of their inputs (so-called adversarial examples). This property reveals an astonishing difference between the information processing of humans and machines and raises security concerns for many deployed machine vision systems, such as autonomous cars. Improving the robustness of vision algorithms is thus important to close the gap between human and machine perception and to enable safety-critical applications. In a robust network, no attack should be able to find imperceptible adversarial perturbations. We thus propose to facilitate an open competition between neural networks and a large variety of strong attacks, including ones that did not exist when the networks were proposed. To this end, the competition has one track for robust vision models as well as one track for targeted and one for untargeted adversarial attacks. Submitted models and attacks are continuously pitted against each other on an image classification task. Attacks can observe the decision of models on a restricted number of self-defined inputs in order to craft model-specific minimal adversarial examples.
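To illustrate the decision-only access model described above, here is a minimal sketch of a decision-based attack against a stand-in black-box classifier (a hypothetical toy; the actual challenge uses image classifiers and far more sophisticated attacks). The attacker only queries labels and binary-searches along the line between the clean input and a known adversarial starting point to shrink the perturbation.

```python
def model_decision(image):
    """Stand-in black-box classifier: label 1 iff mean intensity > 0.5."""
    return int(sum(image) / len(image) > 0.5)

def decision_boundary_search(original, adversarial, queries=30):
    """Shrink the perturbation while keeping the adversarial label,
    using only the model's decisions (no gradients, no scores)."""
    target = model_decision(adversarial)
    assert model_decision(original) != target, "starting point must be adversarial"
    lo, hi = 0.0, 1.0  # interpolation weight toward the adversarial image
    for _ in range(queries):
        mid = (lo + hi) / 2
        candidate = [(1 - mid) * o + mid * a for o, a in zip(original, adversarial)]
        if model_decision(candidate) == target:
            hi = mid  # still adversarial: move closer to the original
        else:
            lo = mid  # label flipped back: retreat toward the adversarial point
    return [(1 - hi) * o + hi * a for o, a in zip(original, adversarial)]

original = [0.2, 0.3, 0.4, 0.1]  # mean 0.25 -> label 0
start = [0.9, 0.9, 0.9, 0.9]     # mean 0.9  -> label 1
minimal = decision_boundary_search(original, start)
# `minimal` keeps the adversarial label with a much smaller perturbation than `start`.
```

The restricted query budget in the track corresponds to the `queries` parameter here: fewer allowed decisions means a coarser estimate of the minimal perturbation.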

Organizers:

Wieland Brendel, University of Tübingen, wieland@bethgelab.org
Jonas Rauber, University of Tübingen, jonas@bethgelab.org
Alexey Kurakin, Google Brain, kurakin@google.com
Nicolas Papernot, Pennsylvania State University & Google Brain, ngp5056@cse.psu.edu
Behar Veliqi, University of Tübingen, behar.veliqi@bethgelab.org
Marcel Salathé, École Polytechnique Fédérale de Lausanne, marcel.salathe@epfl.ch
Sharada P. Mohanty, École Polytechnique Fédérale de Lausanne, sharada.mohanty@epfl.ch
Matthias Bethge, University of Tübingen, matthias@bethgelab.org

Webpage: https://www.crowdai.org/challenges/adversarial-vision-challenge


The Conversational Intelligence Challenge 2 (ConvAI2)

Competition summary: There are currently few datasets appropriate for training and evaluating models for non-goal-oriented dialogue systems (chatbots); and, equally problematic, there is currently no standard procedure for evaluating such models. Our competition aims to establish a concrete scenario for testing chatbots that engage humans, and to become a standard evaluation tool that makes such systems directly comparable. This is the second Conversational Intelligence (ConvAI) Challenge. This year we introduce several improvements: a) providing a dataset, Persona-Chat, from the beginning; b) making the conversations more engaging for humans; c) a simpler evaluation process (automatic evaluation, followed by human evaluation). Persona-Chat is designed to facilitate research into alleviating some of the issues that traditional chit-chat models face. The training set consists of conversations between crowdworkers who were randomly paired and asked to play the part of a provided persona (randomly assigned, and created by another set of crowdworkers). The paired workers were asked to chat naturally and to get to know each other during the conversation. This produces interesting and engaging conversations that learning agents can try to mimic. Models are thus trained to both ask and answer questions about personal topics, and the resulting dialogue can take account of the personas of the speaking partners. Competitors' models will then be compared in three ways: (i) automated evaluation metrics on a new test set hidden from the competitors; (ii) evaluation on Amazon Mechanical Turk; and (iii) 'wild' live evaluation by volunteers having conversations with the bots. The winning dialogue systems will be chosen based on these scores.
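As a flavor of what automated evaluation of a chatbot reply can look like, here is a token-level F1 between a model's reply and a reference reply. This is one commonly used chit-chat metric, shown as an illustration; it is not claimed to be the competition's official metric suite.

```python
from collections import Counter

def token_f1(prediction, reference):
    """Harmonic mean of token precision and recall between two replies."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    # Multiset intersection counts shared tokens (with multiplicity).
    common = sum((Counter(pred) & Counter(ref)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

print(round(token_f1("i love hiking with my dog", "i love hiking"), 2))  # → 0.67
```

Word-overlap metrics like this are cheap and reproducible but correlate imperfectly with human judgment, which is why the competition also includes Mechanical Turk and live 'wild' evaluation.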

Organizers:

Mikhail Burtsev, Moscow Institute of Physics and Technology, burtsev.m@gmail.com
Varvara Logacheva, Moscow Institute of Physics and Technology, varvara.logacheva@gmail.com
Valentin Malykh, Moscow Institute of Physics and Technology, valentin@maly.hk
Iulian Serban, University of Montreal, julianserban@gmail.com
Ryan Lowe, McGill University, lowe.ryan.t@gmail.com
Shrimai Prabhumoye, Carnegie Mellon University, sprabhum@andrew.cmu.edu
Alan W Black, Carnegie Mellon University, awb@cs.cmu.edu
Alexander Rudnicky, Carnegie Mellon University, air@cs.cmu.edu
Jason Williams, Microsoft Research, jason.williams@microsoft.com
Yoshua Bengio, University of Montreal, yoshua.umontreal@gmail.com
Joelle Pineau, Facebook AI Research & McGill University, jpineau@fb.com
Emily Dinan, Facebook AI Research, edinan@fb.com
Douwe Kiela, Facebook AI Research, dkiela@fb.com
Alexander Miller, Facebook AI Research, ahm@fb.com
Kurt Shuster, Facebook AI Research, kshuster@fb.com
Arthur Szlam, Facebook AI Research, aszlam@fb.com
Jack Urbanek, Facebook AI Research, jju@fb.com
Jason Weston, Facebook AI Research, jase@fb.com

Webpage: http://convai.io/


Tracking Machine Learning Challenge

Competition summary: In the footsteps of the Higgs boson (https://www.kaggle.com/c/higgs-boson) and Flavours of Physics (https://www.kaggle.com/c/flavours-of-physics) challenges, data science is being asked to provide novel ideas to advance science. Particle track reconstruction is at the heart of the data processing of the experiments at CERN, and a challenging computational exercise. Contrary to first impression, clustering hundreds of thousands of sparse 3D points into helicoidal tracks of 10-15 points is non-trivial, due to the combinatorial explosion during particle following. In order to fully extract the potential of collider data and enable future scientific discoveries, you will have to overcome this throughput-oriented challenge and provide solutions that run within seconds on hundreds of thousands of points. This truly unique challenge will require all your creativity and computing skills to master. In addition, the submissions will be evaluated by a jury (composed of computer scientists and High Energy Physics tracking experts) to highlight the contributions most promising to the field. A special prize (an NVIDIA V100) from our sponsor will be awarded. Winners and the jury's picks will be invited to NIPS 2018 and to a grand finale workshop at CERN in spring 2019.
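The "cluster points into helicoidal tracks" framing can be sketched on synthetic data: points sampled from ideal helices are grouped by a recovered helix parameter. The parametrization and data below are hypothetical toys, nothing like the competition's detector geometry, and real solutions must scale to roughly 100k points in seconds.

```python
import math

def helix_point(phi0, t):
    """Point at arc parameter t on a unit-radius helix with initial azimuth phi0."""
    return (math.cos(phi0 + t), math.sin(phi0 + t), 0.5 * t)

def estimate_phi0(point):
    """Invert the toy parametrization: recover phi0 from an observed (x, y, z)."""
    x, y, z = point
    t = z / 0.5
    return math.atan2(y, x) - t

def cluster_by_phi0(points, tol=0.1):
    """Greedy clustering on the recovered helix parameter."""
    tracks = []  # list of (phi0_estimate, [points])
    for p in points:
        phi = estimate_phi0(p)
        for track in tracks:
            if abs(track[0] - phi) < tol:
                track[1].append(p)
                break
        else:
            tracks.append((phi, [p]))
    return [pts for _, pts in tracks]

# Two tracks of 10 hits each, interleaved as a detector would record them.
hits = []
for i in range(10):
    hits.append(helix_point(0.0, 0.1 * i))
    hits.append(helix_point(1.5, 0.1 * i))
tracks = cluster_by_phi0(hits)
print(len(tracks), [len(t) for t in tracks])  # → 2 [10, 10]
```

The combinatorial explosion the summary mentions appears as soon as hits are noisy and tracks overlap: the recovered parameters smear out and a greedy pass like this one starts merging or splitting tracks, which is exactly where the competition's difficulty lies.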

Organizers:

David Rousseau, LAL, rousseau@lal.in2p3.fr
Sabrina Amrouche, UNIGE, c.amrouche@cern.ch
Paolo Calafiura, LBNL, pcalafiura@lbl.gov
Steve Farrell, LBNL, sfarrell@lbl.gov
Cecile Germain, UPSud & INRIA, cecile.germain@lri.fr
Vladimir Gligorov, LPNHE, vgligoro@lpnhe.in2p3.fr
Tobias Golling, UNIGE, tobias.golling@unige.ch
Heather Gray, LBNL, hgray@lbl.gov
Isabelle Guyon, UPSud & INRIA, ChaLearn, guyon@clopinet.com
Mikhail Hushchyn, NRU HSE, mikhail.hushchyn@cern.ch
Vincenzo Innocente, CERN, vincenzo.innocente@cern.ch
Moritz Kiehn, UNIGE, msmk@cern.ch
Andreas Salzburger, CERN, andreas.salzburger@cern.ch
Andrey Ustyuzhanin, NRU HSE, andrey.ustyuzhanin@cern.ch
Jean-Roch Vlimant, Caltech, jvlimant@caltech.edu
Yetkin Yilmaz, LAL, yetkinyilmaz@gmail.com

Webpage: https://sites.google.com/site/trackmlparticle/


Pommerman

Competition summary: Train a team of communicative agents to play Bomberman. Compete against other teams.

Organizers:

Cinjon Resnick, NYU, cinjon@nyu.edu
David Ha, Google Brain, hadavid@google.com
Denny Britz, Prediction Machines, dennybritz@gmail.com
Jakob Foerster, Oxford, jakobfoerster@gmail.com
Jason Weston, Facebook FAIR, jase@fb.com
Joan Bruna, NYU, bruna@cims.nyu.edu
Julian Togelius, NYU, julian@togelius.com
Kyunghyun Cho, NYU, kyunghyun.cho@nyu.edu

Webpage: https://www.pommerman.com/


InclusiveImages: A challenge of distributional skew, side information, and global inclusion

Competition summary: Questions surrounding machine learning fairness and inclusivity have attracted heightened attention in recent years, leading to the rapid emergence of a full area of research within the field of machine learning. To provide additional empirical grounding and a venue for head-to-head comparison of new methods, the InclusiveImages competition encourages researchers to develop modeling techniques that reduce the biases that may be encoded in large datasets. In particular, this competition is focused on the challenge of geographic skew encountered when the geographic distribution of training images does not fully represent the levels of diversity encountered at test or inference time.

Organizers:

James Atwood
Eric Breck
Yoni Halpern
D. Sculley
Erica Greene
Peggy Chi
Anurag Batra
Contact: inclusive-images-nips@google.com

Webpage: https://sites.google.com/view/inclusiveimages/


The AI Driving Olympics

Competition summary: Machine learning, deep learning, and deep reinforcement learning have shown remarkable success on a variety of tasks in the very recent past. However, the ability of these methods to supersede classical approaches on physically embodied agents is still unclear. In particular, it remains to be seen whether learning-based approaches can be completely trusted to control safety-critical systems such as self-driving cars. This live competition, presented by the Duckietown Foundation, is designed to explore which approaches work best for which tasks and subtasks in a complex robotic system. The participants will need to design algorithms that implement either part or all of the management and navigation required for a fleet of self-driving miniature taxis. There will be a set of different trials that correspond to progressively more sophisticated behaviors for the cars. These vary in complexity, from the reactive task of lane following to more complex and "cognitive" behaviors, such as obstacle avoidance, point-to-point navigation, and finally coordinating a vehicle fleet while adhering to the entire set of the "rules of the road". We will provide baseline solutions for the tasks based on conventional autonomy architectures; the participants will be free to replace any or all of the components with custom learning-based solutions. The competition will be live at NIPS, but participants will not need to be physically present; they will just need to send their source code packaged as a Docker image. There will be qualifying rounds in simulation, and we will make available the use of "robotariums," facilities that allow remote experimentation in a reproducible setting.

Organizers:

Andrea Censi, nuTonomy and ETH Zürich, acensi@idsc.mavt.ethz.ch
Liam Paull, Université de Montréal, paulll@iro.umontreal.ca
Jacopo Tani, ETH Zürich, tanij@ethz.ch
Scott Livingston, q@rerobots.net
Julian Zilly, ETH Zürich, jzilly@ethz.ch
Ruslan Hristov, nuTonomy, rusi@nutonomy.com
Oscar Beijbom, nuTonomy, oscar@nutonomy.com
Eryk Nice, nuTonomy, eryk.nice@nutonomy.com
Sunil Mallya, Amazon, smallya@amazon.com
Justin De Castri, Amazon, decastri@amazon.com
Hsueh-Cheng (Nick) Wang, National Chiao Tung University, hchengwang@gmail.com
Qing-Shan Jia, Tsinghua, jiaqs@tsinghua.edu.cn
Tao Zhang, Tsinghua, taozhang@tsinghua.edu.cn
Stefano Soatto, UCLA and Amazon, soattos@amazon.com
Magnus Egerstedt, Georgia Tech, magnus.egerstedt@ece.gatech.edu
Yoshua Bengio, Université de Montréal, yoshua.bengio@umontreal.ca
Emilio Frazzoli, ETH Zürich and nuTonomy, emilio.frazzoli@idsc.mavt.ethz.ch

Webpage: https://AI-DO.duckietown.org


AI for prosthetics

Competition summary: Recent advancements in material science and device technology have increased interest in creating prosthetics for improving human movement. Designing these devices, however, is difficult, as it is costly and time-consuming to iterate through many designs. In this challenge, we explore using reinforcement learning techniques to train realistic biomechanical models and approximate the movement patterns of a patient with a prosthetic leg. Successful models will be key to better understanding the human-prosthesis interaction, which will help accelerate development in this field.
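The training setup described above rests on the standard reinforcement-learning interaction loop. The sketch below runs that loop against a deliberately trivial one-dimensional "walker", a hypothetical stand-in for the competition's musculoskeletal simulator, assuming only a Gym-style reset/step interface.

```python
import random

class ToyWalkerEnv:
    """Toy environment: reward forward motion; episode ends after a fixed horizon."""
    def __init__(self):
        self.horizon = 50

    def reset(self):
        self.position, self.steps = 0.0, 0
        return self.position

    def step(self, action):
        # Crude stand-in for "muscle activation" producing forward motion.
        self.position += max(0.0, action)
        self.steps += 1
        reward = max(0.0, action)  # reward the distance covered this step
        done = self.steps >= self.horizon
        return self.position, reward, done

def rollout(env, policy, seed=0):
    """Run one episode and return the total reward."""
    rng = random.Random(seed)
    obs, total = env.reset(), 0.0
    done = False
    while not done:
        obs, reward, done = env.step(policy(obs, rng))
        total += reward
    return total

random_policy = lambda obs, rng: rng.uniform(-1, 1)
print(round(rollout(ToyWalkerEnv(), random_policy), 2))
```

An RL method for such a challenge replaces `random_policy` with a learned controller and updates it from the (observation, action, reward) stream; the actual biomechanical simulation has high-dimensional observations and continuous muscle activations rather than this single scalar.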

Organizers:

Łukasz Kidziński, Stanford, lukasz.kidzinski@stanford.edu
Carmichael Ong, Stanford, ongcf@stanford.edu
Sharada Mohanty, EPFL, sharada.mohanty@epfl.ch
Jennifer Hicks, Stanford, jenhicks@stanford.edu
Joy Ku, Stanford, joyku@stanford.edu
Sean Carroll, EPFL, sean.carroll@epfl.ch
Sergey Levine, UC Berkeley, svlevine@eecs.berkeley.edu
Marcel Salathé, EPFL, marcel.salathe@epfl.ch
Scott Delp, Stanford, delp@stanford.edu

Webpage: https://www.crowdai.org/challenges/nips-2018-ai-for-prosthetics-challenge