Challenges in machine learning and data science are open online competitions that address problems by providing datasets or simulated environments. They measure the performance of machine learning algorithms on a given problem. The playful nature of challenges naturally attracts students, making them a great teaching resource. Beyond their use as educational tools, however, challenges have a role to play in the democratization of AI and machine learning: they function as cost-effective problem-solving tools and encourage the development of reusable problem templates and open-sourced solutions. At present, though, the geographic and sociological distribution of challenge participants and organizers is heavily biased. While recent successes in machine learning have raised high hopes, there is a growing concern that their societal and economic benefits might increasingly be concentrated in the hands, and under the control, of a few.
CiML (Challenges in Machine Learning) is a forum that brings together workshop organizers, platform providers, and participants to discuss best practices in challenge organization, new methods, and application opportunities for designing high-impact challenges. Following the success of previous years' workshops, we will reconvene and discuss new opportunities for broadening our community.
For this sixth edition of the CiML workshop at NeurIPS, our objective is twofold: (1) to enlarge the community, fostering diversity among participants and organizers; and (2) to promote the organization of challenges for the benefit of more diverse communities.
The workshop provides room for discussion on these topics and aims to bring together potential partners to organize such challenges and to stimulate "machine learning for good", i.e., the organization of challenges for the benefit of society. We have invited prominent speakers with experience in this domain.
Fri 8:00 a.m. - 8:15 a.m.
Welcome and Opening Remarks (Opening)
Adrienne Mendrik · Wei-Wei Tu · Isabelle Guyon · Evelyne Viegas · Ming LI
Fri 8:15 a.m. - 9:00 a.m.
Amir Banifatemi (XPrize) "AI for Good via Machine Learning Challenges" (Invited Talk)
"AI for Good" efforts (e.g., applications in sustainability, education, health, financial inclusion, etc.) have demonstrated the capacity to simultaneously advance intelligent-system research and the greater good. Unfortunately, the majority of research that could find motivation in real-world "good" problems still centers on problems with industrial or toy-problem performance baselines. Competitions can serve as an important shaping reward for steering academia towards research that is simultaneously impactful on our state of knowledge and the state of the world. This talk covers three aspects of AI for Good competitions. First, we survey current efforts within the AI for Good application space as a means of identifying current and future opportunities. Next, we discuss how more qualitative notions of "Good" can be used as benchmarks in addition to more quantitative competition objective functions. Finally, we will provide notes on building coalitions of domain experts to develop and guide socially impactful competitions in machine learning.
Amir Banifatemi
Fri 9:00 a.m. - 9:45 a.m.
Emily Bender (University of Washington) "Making Stakeholder Impacts Visible in the Evaluation Cycle: Towards Fairness-Integrated Shared Tasks and Evaluation Metrics" (Invited Talk)
In a typical machine learning competition or shared task, success is measured in terms of systems' ability to reproduce gold-standard labels. The potential impact of the systems being developed on stakeholder populations, if considered at all, is studied separately from system 'performance'. Given the tight train-eval cycle of both shared tasks and system development in general, we argue that making disparate impact on vulnerable populations visible in dataset and metric design will be key to making the potential for such impact present and salient to developers. We see this as an effective way to promote the development of machine learning technology that is helpful for people, especially those who have been subject to marginalization. This talk will explore how to develop such shared tasks, considering task choice, stakeholder community input, and annotation and metric design desiderata. Joint work with Hal Daumé III (University of Maryland), Bernease Herman (University of Washington), and Brandeis Marshall (Spelman College).
Emily M. Bender
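One concrete way to make stakeholder impacts visible in the metric itself is to report performance disaggregated by stakeholder group, and to rank systems on worst-group performance or on the disparity between groups rather than on aggregate accuracy alone. A minimal sketch of such a metric (the function, group labels, and data below are illustrative assumptions, not material from the talk):

```python
from collections import defaultdict

def disaggregated_report(y_true, y_pred, groups):
    """Accuracy per stakeholder group, plus worst-group score and disparity.

    y_true, y_pred: gold and predicted labels.
    groups: stakeholder-group membership of each example (illustrative).
    """
    correct, total = defaultdict(int), defaultdict(int)
    for gold, pred, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(gold == pred)
    per_group = {g: correct[g] / total[g] for g in total}
    return {
        "per_group": per_group,
        "worst_group": min(per_group.values()),  # candidate leaderboard score
        "disparity": max(per_group.values()) - min(per_group.values()),
    }

# Illustrative usage with made-up data:
print(disaggregated_report(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 0],
    groups=["A", "A", "A", "B", "B", "B"],
))
```

Scoring on the worst group rewards systems that close gaps, whereas aggregate accuracy can hide poor performance on a minority population behind strong majority-group results.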
Fri 9:45 a.m. - 10:30 a.m.
Coffee Break
Fri 10:30 a.m. - 11:15 a.m.
Dina Machuve (Nelson Mandela African Institution of Science and Technology) “Machine Learning Competitions: The Outlook from Africa” (Invited Talk)
The current AI landscape in Africa mainly focuses on capacity building. The ongoing efforts to strengthen AI capacity in Africa are organized as summer schools, workshops, meetups, competitions, and one long-term program at the Master's level. The main initiatives driving the AI capacity-building agenda in Africa include (a) Deep Learning Indaba, (b) Data Science Africa, (c) Data Science Nigeria, (d) Nairobi Women in Machine Learning and Data Science, (e) Zindi, and (f) the African Master's in Machine Intelligence (AMMI) at AIMS. The talk will summarize our experience with the low participation of African AI developers in machine learning competitions and our recommendations for addressing the current challenges.
Dina Machuve
Fri 11:15 a.m. - 11:30 a.m.
Dog Image Generation Competition on Kaggle (Talk)
We present a novel format of machine learning competition in which a user submits code that trains a generative model on the provided samples; the code then runs on Kaggle, produces dog images, and the user receives a score for the generated content based on (1) image quality, (2) image diversity, and (3) a memorization penalty. This style of competition targets Generative Adversarial Networks (GANs) [4], but is open to all generative models. Our implementation addresses overfitting by incorporating two different pre-trained neural networks, as well as two separate "ground truth" image datasets, for the public and private leaderboards. We also use an enclosed compute environment to prevent submission of non-generated images. In this paper, we describe both the algorithmic and system design of our competition, and share lessons learned from running it [6] in July 2019, with more than 900 participating teams and over 37,000 code submissions received.
Wendy Kan · Phil Culliton
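The memorization penalty is what most distinguishes this format from ordinary image-quality scoring: a submission that merely reproduces training images should score poorly even if those images look perfect. A minimal sketch of one such penalty, based on nearest-neighbor distance in an embedding space (the feature representation and the penalty form are illustrative assumptions, not Kaggle's actual implementation):

```python
import numpy as np

def memorization_penalty(gen_feats, train_feats):
    """Penalize generated images that sit too close to training images.

    gen_feats, train_feats: arrays of shape (n, d) of image embeddings
    from some pre-trained network (illustrative; the competition used
    its own private feature extractors and datasets).
    """
    # Distance from each generated image to its nearest training image.
    dists = np.linalg.norm(
        gen_feats[:, None, :] - train_feats[None, :, :], axis=-1
    )
    nearest = dists.min(axis=1)
    # Small average nearest-neighbor distance suggests memorization.
    return 1.0 / (nearest.mean() + 1e-8)

# Illustrative usage with synthetic embeddings:
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 16))
fresh = rng.normal(size=(50, 16))            # genuinely novel samples
copies = train[:50] + 1e-3                   # near-duplicates of training data
print(memorization_penalty(fresh, train))    # low penalty
print(memorization_penalty(copies, train))   # high penalty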
Fri 11:30 a.m. - 11:45 a.m.
Learning To Run a Power Network Competition (Talk)
We present the results of the first edition, as well as perspectives for a potential next edition, of the "Learning To Run a Power Network" (L2RPN) competition, which tests the potential of reinforcement learning to solve a real-world problem of great practical importance: controlling power transportation in power grids while keeping people and equipment safe.
Benjamin Donnot
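The grid-control problem can be read as a standard reinforcement learning loop: the agent observes grid measurements, chooses remedial actions such as topology changes, and is rewarded for keeping lines within their safe operating limits. A toy sketch of that formulation (the state variables, action set, and dynamics below are an illustrative simplification, not the actual competition environment):

```python
import numpy as np

class ToyGridEnv:
    """Illustrative stand-in for a power-grid control environment."""

    def __init__(self, n_lines=5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_lines = n_lines

    def reset(self):
        self.load = self.rng.uniform(0.3, 0.9, self.n_lines)  # line loading
        return self.load.copy()

    def step(self, action):
        # action: index of a line to relieve (e.g., via a topology switch).
        self.load += self.rng.normal(0.0, 0.05, self.n_lines)  # demand drift
        self.load[action] *= 0.8                               # relief effect
        overload = np.clip(self.load - 1.0, 0.0, None).sum()
        reward = -overload                    # safe operation => reward near 0
        done = bool((self.load > 1.2).any())  # severe overload ends episode
        return self.load.copy(), reward, done

# A trivial greedy policy: always relieve the most-loaded line.
env = ToyGridEnv()
obs = env.reset()
for _ in range(20):
    obs, reward, done = env.step(int(np.argmax(obs)))
    if done:
        break
```

The real competition replaces this toy with a physically accurate power-flow simulator, but the agent-environment contract is the same.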
Fri 11:45 a.m. - 12:00 p.m.
The AI Driving Olympics: An Accessible Robot Learning Benchmark (Talk)
Despite recent breakthroughs, the ability of deep learning and reinforcement learning to outperform traditional approaches to controlling physically embodied robotic agents remains largely unproven. To help bridge this gap, we have developed the “AI Driving Olympics” (AI-DO), a competition with the objective of evaluating the state of the art in machine learning and artificial intelligence for mobile robotics. Based on the simple and well-specified autonomous driving and navigation environment called “Duckietown,” AI-DO includes a series of tasks of increasing complexity, from simple lane-following to fleet management. For each task, we provide tools for competitors to use in the form of simulators, data logs, code templates, baseline implementations, and low-cost access to robotic hardware. We evaluate submissions in simulation online, on standardized hardware environments, and finally at the competition events. We held successful AI-DO competitions at NeurIPS 2018 and ICRA 2019, and will hold AI-DO 3 at NeurIPS 2019. Together, these competitions highlight the need for better benchmarks, which are lacking in robotics, as well as improved mechanisms to bridge the gap between simulation and reality.
Matthew Walter
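Online evaluation in simulation typically reduces to a fixed agent interface that the evaluator drives through an episode loop. A minimal sketch of what such an interface can look like (the class, method names, and heuristic below are illustrative assumptions, not the actual AI-DO submission template):

```python
import numpy as np

class LaneFollowingAgent:
    """Illustrative baseline: steer toward the brighter half of the image."""

    def act(self, observation):
        # observation: camera image as an (H, W, 3) array. A trivial
        # heuristic compares the brightness of the left and right halves
        # and steers toward the brighter side; speed is held constant.
        half = observation.shape[1] // 2
        left = observation[:, :half].mean()
        right = observation[:, half:].mean()
        steering = float(np.clip(right - left, -1.0, 1.0))
        return np.array([0.5, steering])  # [speed, steering]

def evaluate(agent, env, episodes=5):
    """Generic gym-style episode loop an evaluator might run on submissions."""
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            obs, reward, done, _ = env.step(agent.act(obs))
            total += reward
    return total / episodes
```

Because the interface is fixed, the same submission can be scored in simulation, on standardized hardware, and at the live event without modification.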
Fri 12:00 p.m. - 12:15 p.m.
Conclusion on TrackML, a Particle Physics Tracking Machine Learning Challenge Combining Accuracy and Inference Speed (Talk)
David Rousseau · Jean-Roch Vlimant
Fri 12:15 p.m. - 2:00 p.m.
Catered Lunch and Poster Viewing (in Workshop Room) (Break, Poster Session)
Accepted Posters:
- Kandinsky Patterns: An open toolbox for creating explainable machine learning challenges (Heimo Muller · Andreas Holzinger)
- MOCA: An Unsupervised Algorithm for Optimal Aggregation of Challenge Submissions (Robert Vogel · Mehmet Eren Ahsen · Gustavo A. Stolovitzky)
- FDL: Mission Support Challenge (Luís F. Simões · Ben Day · Vinutha M. Shreenath · Callum Wilson)
- From data challenges to collaborative gig science: Coopetitive research process and platform (Andrey Ustyuzhanin · Mikhail Belous · Leyla Khatbullina · Giles Strong)
- Smart(er) Machine Learning for Practitioners (Prabhu Pradhan)
- Improving Reproducibility of Benchmarks (Xavier Bouthillier)
- Guaranteeing Reproducibility in Deep Learning Competitions (Brandon Houghton)
- Organizing crowd-sourced AI challenges in enterprise environments: opportunities and challenges (Mahtab Mirmomeni · Isabell Kiral · Subhrajit Roy · Todd Mummert · Alan Braz · Jason Tsay · Jianbin Tang · Umar Asif · Thomas Schaffter · Eren Mehmet · Bruno De Assis Marques · Stefan Maetschke · Rania Khalaf · Michal Rosen-Zvi · John Cohn · Gustavo Stolovitzky · Stefan Harrer)
- WikiCities: a Feature Engineering Educational Resource (Pablo Duboue)
- Reinforcement Learning Meets Information Seeking: Dynamic Search Challenge (Zhiwen Tang · Grace Hui Yang)
- AI Journey 2019: School Tests Solving Competition (Alexey Natekin · Peter Romov · Valentin Malykh)
- A BIRDSAI View for Conservation (Elizabeth Bondi · Milind Tambe · Raghav Jain · Palash Aggrawal · Saket Anand · Robert Hannaford · Ashish Kapoor · Jim Piavis · Shital Shah · Lucas Joppa · Bistra Dilkina)
Gustavo Stolovitzky · Prabhu Pradhan · Pablo Duboue · Zhiwen Tang · Aleksei Natekin · Elizabeth Bondi-Kelly · Xavier Bouthillier · Stephanie Milani · Heimo Müller · Andreas T. Holzinger · Stefan Harrer · Ben Day · Andrey Ustyuzhanin · William Guss · Mahtab Mirmomeni
Fri 2:00 p.m. - 2:45 p.m.
Frank Hutter (University of Freiburg) "A Proposal for a New Competition Design Emphasizing Scientific Insights" (Invited Talk)
The typical setup in machine learning competitions is to provide one or more datasets and a performance metric, leaving it entirely up to participants which approach to use: how to engineer better features, whether and how to pretrain models on related data, how to tune hyperparameters, how to combine multiple models in an ensemble, etc. The fact that work on each of these components often leads to substantial improvements has several consequences: (1) among several skilled teams, the one with the most manpower and engineering drive often wins; (2) it is often unclear why one entry performs better than another; and (3) scientific insights remain limited. Based on my experience both participating in and organizing several challenges, I will propose a new competition design that instead emphasizes scientific insight by dividing the various ways in which teams could improve performance into (largely orthogonal) modular components, each of which defines its own competition. For example, one could run a competition focusing only on effective hyperparameter tuning of a given pipeline (across private datasets). With the same code base and datasets, one could likewise run a competition focusing only on finding better neural architectures, or only better preprocessing methods, or only a better training pipeline, or only better pre-training methods, etc. One could also run several of these competitions in parallel, hot-swapping better components found in one competition into the others. I will argue that the result would likely be substantially more valuable in terms of scientific insights than traditional competitions, and may even lead to better final performance.
Frank Hutter
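One way to picture this modular design is a pipeline with fixed interfaces, where each competition is allowed to replace exactly one component while the organizers pin all the others. A minimal sketch under that reading (the slot names and interfaces are illustrative assumptions, not a design from the talk):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Pipeline:
    """Each field is a swappable slot with a fixed interface. A given
    competition accepts submissions for one slot; the rest stay pinned."""
    preprocess: Callable[[Any], Any]
    build_model: Callable[[dict], Any]           # architecture slot
    tune: Callable[["Pipeline", Any], dict]      # hyperparameter slot
    train: Callable[[Any, Any], Any]             # training-pipeline slot

    def run(self, raw_data):
        data = self.preprocess(raw_data)
        config = self.tune(self, data)
        model = self.build_model(config)
        return self.train(model, data)

def hpo_competition_entry(pipeline: Pipeline, data) -> dict:
    """A submission to the hyperparameter-tuning competition replaces only
    this function; preprocess/build_model/train remain the organizers'."""
    return {"lr": 1e-3, "batch_size": 64}  # trivial placeholder "search"
```

Because the slots are orthogonal, a better `tune` found in one competition can be hot-swapped into the pipeline used by the architecture competition, exactly as the talk suggests.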
Fri 2:45 p.m. - 3:00 p.m.
Design and Analysis of Experiments: A Challenge Approach in Teaching (Talk)
Over the past few years, as part of an international collaboration, we have explored the benefits of involving students both in organizing and in participating in challenges as a pedagogical tool. Engaging in the design and resolution of a competition can be seen as a hands-on means of learning proper design and analysis of experiments and of gaining a deeper understanding of other aspects of machine learning. Graduate students at University Paris-Sud (Paris, France) carry out class projects in which they create a challenge end-to-end: defining the research problem, collecting or formatting data, creating a starting kit, and implementing and testing the website. The application domains and types of data are extremely diverse: medicine, ecology, marketing, computer vision, recommendation, text processing, etc. The challenges thus created are then used as class projects for undergraduate students who have to solve them, both at University Paris-Sud and at Rensselaer Polytechnic Institute (RPI, New York, USA), providing rich learning experiences at scale. New this year, students are creating challenges motivated by “AI for good” and will produce re-usable templates to inspire others to create challenges for the benefit of humanity.
Adrien Pavao
Fri 3:00 p.m. - 3:15 p.m.
The model-to-data paradigm: overcoming data access barriers in biomedical competitions (Talk)
Data competitions often rely on the physical distribution of data to challenge participants, a significant limitation given that much data is proprietary, sensitive, and often non-shareable. To address this, the DREAM Challenges have advanced a challenge framework called model-to-data (MTD), which requires participants to submit re-runnable algorithms instead of model predictions. The DREAM organization has successfully completed multiple MTD-based challenges and is expanding this approach to unlock highly sensitive and non-distributable human data for use in biomedical data challenges.
Justin Guinney
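In practice, model-to-data means the participant ships code with a fixed entry point, and the organizer runs that code inside a controlled environment next to the sequestered data, returning only aggregate scores. A minimal sketch of the organizer-side harness for such a contract (the paths, entry-point names, and scoring are illustrative assumptions, not the DREAM specification):

```python
import importlib.util
import json
from pathlib import Path

PRIVATE_DATA = Path("/private/data")        # never leaves the enclave
SUBMISSION = Path("/submission/model.py")   # must define train() and predict()

def evaluate(predictions, labels_path):
    """Simple accuracy against sequestered labels (illustrative)."""
    labels = json.loads(Path(labels_path).read_text())
    hits = sum(predictions.get(k) == v for k, v in labels.items())
    return hits / len(labels)

def run_submission():
    """Load the participant's re-runnable code, execute it on sequestered
    data, and emit only the score, never the data or raw predictions."""
    spec = importlib.util.spec_from_file_location("model", SUBMISSION)
    model = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(model)

    trained = model.train(PRIVATE_DATA / "train")        # re-runnable training
    predictions = model.predict(trained, PRIVATE_DATA / "test")
    score = evaluate(predictions, PRIVATE_DATA / "labels.json")
    print(json.dumps({"score": score}))  # the only output returned
```

The key design point is the direction of movement: the model travels to the data, so proprietary or sensitive datasets can power a challenge without ever being distributed.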
Fri 3:15 p.m. - 3:30 p.m.
The Deep Learning Epilepsy Detection Challenge: Design, Implementation, and Test of a New Crowd-Sourced AI Challenge Ecosystem (Talk)
Isabell Kiral
Fri 3:30 p.m. - 4:15 p.m.
Coffee Break
Fri 4:15 p.m. - 6:00 p.m.
Open Space Topic “The Organization of Challenges for the Benefit of More Diverse Communities” (Open Space Session)
“Open Space” is a technique for running meetings in which the participants create and manage the agenda themselves. Participants can propose ideas that address the open space topic; these are then divided into sessions that all other participants can join and brainstorm in. After the open space, we will collect all the ideas and post them on the CiML website.
Adrienne Mendrik · Isabelle Guyon · Wei-Wei Tu · Evelyne Viegas · Ming LI
Author Information
Adrienne Mendrik (Netherlands eScience Center)
Wei-Wei Tu (4Paradigm Inc.)
Isabelle Guyon (UPSud, INRIA, University Paris-Saclay and ChaLearn)

Isabelle Guyon recently joined Google Brain as a research scientist. She is also professor of artificial intelligence at Université Paris-Saclay (Orsay). Her areas of expertise include computer vision, bioinformatics, and power systems. She is best known for being a co-inventor of Support Vector Machines. Her recent interests are in automated machine learning, meta-learning, and data-centric AI. She has been a strong promoter of challenges and benchmarks, and is president of ChaLearn, a non-profit dedicated to organizing machine learning challenges. She is community lead of Codalab competitions, a challenge platform used both in academia and industry. She co-organized the “Challenges in Machine Learning Workshop” @ NeurIPS between 2014 and 2019, launched the "NeurIPS challenge track" in 2017 while she was general chair, and pushed the creation of the "NeurIPS datasets and benchmark track" in 2021, as a NeurIPS board member.
Evelyne Viegas (Microsoft Research)
Ming LI (Nanjing University)