The focus of this workshop is the use of machine learning to help address climate change, encompassing mitigation efforts (reducing greenhouse gas emissions), adaptation measures (preparing for unavoidable consequences), and climate science (our understanding of the climate and future climate predictions). The scope of the workshop includes climate-relevant applications of machine learning to the power sector, buildings and transportation infrastructure, agriculture and land use, extreme event prediction, disaster response, climate policy, and climate finance. The goals of the workshop are: (1) to showcase high-impact applications of ML to climate change mitigation, adaptation, and climate science, (2) to showcase novel and interesting problem settings and challenges for ML techniques, (3) to encourage fruitful collaboration between the ML community and a diverse set of researchers and practitioners from climate change-related fields, and (4) to promote dialogue with decision-makers in the private and public sectors to ensure that the work presented leads to responsible and meaningful deployment.
Tue 6:15 a.m. - 6:30 a.m.
Opening Remarks
SlidesLive Video »
Tue 6:30 a.m. - 7:15 a.m.
Daron Acemoglu: Is AI the Solution to Climate Change?
(Keynote talk)
SlidesLive Video » Title: Is AI the Solution to Climate Change? Abstract: Many pin their hopes on the tech sector and advances in Artificial Intelligence (AI) and other digital technologies in combating climate change. In this talk, I argue against this optimistic view. I first provide evidence on how AI has been used in US businesses, pointing out that it has had few of the promised benefits and has instead continued the process of inequality-increasing and wage-reducing automation. I then review the evidence on advances in renewable energy, arguing that it has responded strongly to subsidies and prices of fossil fuels, but these advances have slowed down lately. There appears to be no alternative to increasing carbon taxes significantly and investing in renewable and green technologies. There is no evidence that big tech companies have played a leading role, and most existing evidence suggests that big energy companies have typically undermined efforts to switch to renewables. Bio: Daron Acemoglu is an Institute Professor at MIT and an elected fellow of the National Academy of Sciences, American Philosophical Society, the British Academy of Sciences, the Turkish Academy of Sciences, the American Academy of Arts and Sciences, the Econometric Society, the European Economic Association, and the Society of Labor Economists. He is also a member of the Group of Thirty. He is the author of five books, including New York Times bestseller Why Nations Fail: Power, Prosperity, and Poverty (joint with James A. Robinson), Introduction to Modern Economic Growth, and The Narrow Corridor: States, Societies, and the Fate of Liberty (with James A. Robinson). His academic work covers a wide range of areas, including political economy, economic development, economic growth, technological change, inequality, labor economics and economics of networks. Daron Acemoglu has received the inaugural T. W. 
Schultz Prize from the University of Chicago in 2004, and the inaugural Sherwin Rosen Award for outstanding contribution to labor economics in 2004, Distinguished Science Award from the Turkish Sciences Association in 2006, the John von Neumann Award, Rajk College, Budapest in 2007, the Carnegie Fellowship in 2017, the Jean-Jacques Laffont Prize in 2018, the Global Economy Prize in 2019, and the CME Mathematical and Statistical Research Institute prize in 2021. He was awarded the John Bates Clark Medal in 2005, the Erwin Plein Nemmers Prize in 2012, and the 2016 BBVA Frontiers of Knowledge Award. He holds Honorary Doctorates from the University of Utrecht, the Bosporus University, University of Athens, Bilkent University, the University of Bath, Ecole Normale Superieure, Saclay Paris, and the London Business School.
Daron Acemoglu
Tue 7:15 a.m. - 7:25 a.m.
Resolving Super Fine-Resolution SIF via Coarsely-Supervised U-Net Regression
(Spotlight)
link »
SlidesLive Video » Climate change presents challenges to crop productivity, such as increasing the likelihood of heat stress and drought. Solar-Induced Chlorophyll Fluorescence (SIF) is a powerful way to monitor how crop productivity and photosynthesis are affected by changing climatic conditions. However, satellite SIF observations are only available at a coarse spatial resolution (e.g. 3-5km) in most places, making it difficult to determine how individual crop types or farms are doing. This poses a challenging coarsely-supervised regression task; at training time, we only have access to SIF labels at a coarse resolution (3 km), yet we want to predict SIF at a very fine spatial resolution (30 meters), a 100x increase. We do have some fine-resolution input features (such as Landsat reflectance) that are correlated with SIF, but the nature of the correlation is unknown. To address this, we propose Coarsely-Supervised Regression U-Net (CSR-U-Net), a novel approach to train a U-Net for this coarse supervision setting. CSR-U-Net takes in a fine-resolution input image, and outputs a SIF prediction for each pixel; the average of the pixel predictions is trained to equal the true coarse-resolution SIF for the entire image. Even though this is a very weak form of supervision, CSR-U-Net can still learn to predict accurately, due to its inherent localization abilities, plus additional enhancements that facilitate the incorporation of scientific prior knowledge. CSR-U-Net can resolve fine-grained variations in SIF more accurately than existing averaging-based approaches, which ignore fine-resolution spatial variation during training. CSR-U-Net could also be useful for a wide range of "downscaling" problems in climate science, such as increasing the resolution of global climate models.
Joshua Fan · Di Chen · Jiaming Wen · Ying Sun · Carla Gomes
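The training signal described above (supervising only the average of the per-pixel predictions) can be sketched in a few lines of numpy. The function name and toy shapes are ours, not the paper's:

```python
import numpy as np

def coarse_supervision_loss(pixel_preds, coarse_sif):
    """Squared error between the average of the fine-resolution pixel
    predictions for a tile and the single coarse-resolution SIF label."""
    return float((pixel_preds.mean() - coarse_sif) ** 2)

# A tile predicted uniformly at the coarse label incurs zero loss.
perfect = coarse_supervision_loss(np.full((4, 4), 2.0), 2.0)
```

Because only the mean is constrained, every pixel in a tile receives the same gradient; the paper relies on the U-Net's localization abilities and physical priors to recover sub-tile variation.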
Tue 7:25 a.m. - 7:34 a.m.
Detecting Abandoned Oil Wells Using Machine Learning and Semantic Segmentation
(Spotlight)
link »
SlidesLive Video » Around the world, there are millions of unplugged abandoned oil and gas wells, leaking methane into the atmosphere. The locations of many of these wells, as well as their greenhouse gas emissions impacts, are unknown. Machine learning methods in computer vision and remote sensing, such as semantic segmentation, have made it possible to quickly analyze large amounts of satellite imagery to detect salient information. This project aims to automatically identify undocumented oil and gas wells in the province of Alberta, Canada to aid in documentation, estimation of emissions and maintenance of high-emitting wells. |
Michelle Lin · David Rolnick
Tue 7:34 a.m. - 7:38 a.m.
Semi-Supervised Classification and Segmentation on High Resolution Aerial Images
(Spotlight)
link »
SlidesLive Video » FloodNet is a high-resolution image dataset acquired by a small UAV platform, DJI Mavic Pro quadcopters, after Hurricane Harvey. The dataset presents a unique challenge of advancing the damage assessment process for post-disaster scenarios using unlabeled and limited labeled data. We propose a solution to address its classification and semantic segmentation challenge. We approach this problem by generating pseudo labels for both classification and segmentation during training and slowly incrementing the amount by which the pseudo label loss affects the final loss. Using this semi-supervised method of training helped us improve our baseline supervised loss by a huge margin for classification, allowing the model to generalize and perform better on the validation and test splits of the dataset. In this paper, we compare and contrast the various methods and models for image classification and semantic segmentation on the FloodNet dataset.
Sahil Khose · Abhiraj Tiwari · Ankita Ghosh
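A minimal sketch of the ramped pseudo-label loss described above. The linear schedule and names are our assumptions; the abstract only says the pseudo-label term's weight is increased slowly:

```python
def pseudo_label_weight(step, ramp_steps, max_weight=1.0):
    """Linearly ramp the pseudo-label loss weight from 0 to max_weight."""
    return max_weight * min(1.0, step / ramp_steps)

def combined_loss(supervised_loss, pseudo_loss, step, ramp_steps):
    """Supervised loss plus the slowly-ramped pseudo-label loss term."""
    return supervised_loss + pseudo_label_weight(step, ramp_steps) * pseudo_loss
```

Early in training the model learns only from labeled data; as pseudo labels become more reliable, their loss term is phased in.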
Tue 7:38 a.m. - 7:46 a.m.
A GNN-RNN Approach for Harnessing Geospatial and Temporal Information: Application to Crop Yield Prediction
(Spotlight)
link »
SlidesLive Video » Climate change poses new challenges to agricultural production, as crop yields are extremely sensitive to climatic variation. Accurately predicting the effects of weather patterns on crop yield is crucial for addressing issues such as food insecurity, supply stability, and economic planning. Recently, there have been many attempts to use machine learning models for crop yield prediction. However, these models either restrict their tasks to a relatively small region or a short time period (e.g. a few years), which makes them hard to generalize spatially and temporally. They also view each location as an i.i.d. sample, ignoring spatial correlations in the data. In this paper, we introduce a novel graph-based recurrent neural network for crop yield prediction, which incorporates both geographical and temporal structure. Our method is trained, validated, and tested on over 2000 counties from 41 states in the US mainland, covering years from 1981 to 2019. As far as we know, this is the first machine learning method that embeds geographical knowledge in crop yield prediction and predicts crop yields at the county level nationwide. Experimental results show that our proposed method consistently outperforms a wide variety of existing state-of-the-art methods, validating the effectiveness of geospatial and temporal information.
Joshua Fan · Junwen Bai · Zhiyun Li · Ariel Ortiz-Bobea · Carla Gomes
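The spatial part of the idea, letting each county borrow signal from its neighbors instead of treating locations as i.i.d. samples, reduces to neighborhood aggregation. A toy mean-aggregation step in plain numpy (ours; the paper's actual GNN-RNN architecture is not reproduced here):

```python
import numpy as np

def neighbor_average(features, adjacency):
    """One mean-aggregation step over a county adjacency list:
    each node averages its own feature vector with its neighbors'."""
    out = np.zeros_like(features, dtype=float)
    for node, neighbors in enumerate(adjacency):
        idx = list(neighbors) + [node]  # include a self-loop
        out[node] = features[idx].mean(axis=0)
    return out
```

Stacking such aggregation steps (with learned weights) lets yield predictions at one county be informed by conditions in adjacent counties.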
Tue 7:45 a.m. - 8:30 a.m.
Poster Session 1
(Poster Session)
Gather.Town Rooms:
Papers Track:
Proposals Track:
Tutorials Track:
Tue 8:30 a.m. - 9:30 a.m.
Discussion Panel 1: Decision Making
(Discussion Panel)
SlidesLive Video »
Lieve M.L. Helsen · Lynn Kaack · João M. Costa Sousa · Eliane Ubalijoro
Tue 9:30 a.m. - 9:40 a.m.
Two-phase training mitigates class imbalance for camera trap image classification with CNNs
(Spotlight)
link »
SlidesLive Video » By leveraging deep learning to automatically classify camera trap images, ecologists can monitor biodiversity conservation efforts and the effects of climate change on ecosystems more efficiently. Due to the imbalanced class-distribution of camera trap datasets, current models are biased towards the majority classes. As a result, they obtain good performance for a few majority classes but poor performance for many minority classes. We used two-phase training to increase the performance for these minority classes. Next to a baseline model, we trained four models that each implemented a different version of two-phase training on a subset of the highly imbalanced Snapshot Serengeti dataset. Our results suggest that two-phase training can improve performance for many minority classes, with limited loss in performance for the other classes. We find that two-phase training based on majority undersampling increases class-specific F1-scores up to 3.0%. We also find that two-phase training outperforms using only oversampling or undersampling by 6.1% in F1-score on average. Finally, we find that a combination of over- and undersampling leads to a better performance than using them individually.
Ruben Cartuyvels
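A sketch of the majority-undersampling step that two-phase training builds on. The function name and the choice to sample every class down to the size of the rarest class are our assumptions:

```python
import random
from collections import defaultdict

def undersample_to_minority(labels, seed=0):
    """Return sorted indices of a subset in which every class is
    randomly reduced to the size of the rarest class."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    n_min = min(len(idxs) for idxs in by_class.values())
    rng = random.Random(seed)
    keep = []
    for idxs in by_class.values():
        keep.extend(rng.sample(idxs, n_min))
    return sorted(keep)
```

Training one phase on such a balanced subset reduces the bias towards majority classes, at the cost of discarding majority-class examples in that phase.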
Tue 9:40 a.m. - 9:49 a.m.
Predicting Atlantic Multidecadal Variability
(Spotlight)
link »
SlidesLive Video » Atlantic Multidecadal Variability (AMV) describes variations of North Atlantic sea surface temperature with a typical cycle of between 60 and 70 years. AMV strongly impacts local climate over North America and Europe; therefore, prediction of AMV, especially the extreme values, is of great societal utility for understanding and responding to regional climate change. This work tests multiple machine learning models to improve the state of AMV prediction from maps of sea surface temperature, salinity, and sea level pressure in the North Atlantic region. We use data from the Community Earth System Model 1 Large Ensemble Project, a state-of-the-art climate model with 3,440 years of data. Our results demonstrate that all of the models we use outperform the traditional persistence forecast baseline. Predicting the AMV is important for identifying future extreme temperatures and precipitation as well as hurricane activity in Europe and North America up to 25 years in advance.
Glenn Liu · Peidong Wang · Matthew Beveridge · Young-Oh Kwon · Iddo Drori
Tue 9:49 a.m. - 9:59 a.m.
Learned Benchmarks for Subseasonal Forecasting
(Spotlight)
link »
SlidesLive Video » We develop a subseasonal forecasting toolkit of simple learned benchmark models that outperform both operational practice and state-of-the-art machine learning and deep learning methods. Our new models include (a) Climatology++, an adaptive alternative to climatology that, for precipitation, is 9% more accurate and 250% more skillful than the United States operational Climate Forecasting System (CFSv2); (b) CFSv2++, a learned CFSv2 correction that improves temperature and precipitation accuracy by 7-8% and skill by 50-275%; and (c) Persistence++, an augmented persistence model that combines CFSv2 forecasts with lagged measurements to improve temperature and precipitation accuracy by 6-9% and skill by 40-130%. Across the contiguous U.S., these models consistently outperform standard meteorological baselines, state-of-the-art learning methods, and the European Centre for Medium-Range Weather Forecasts ensemble. Overall, we find that augmenting traditional forecasting approaches with learned enhancements yields an effective and computationally inexpensive strategy for building the next generation of subseasonal forecasting benchmarks.
Soukayna Mouatadid · Paulo Orenstein · Genevieve Flaspohler · Miruna Oprescu · Judah Cohen · Franklyn Wang · Sean Knight · Maria Geogdzhayeva · Sam Levang · Ernest Fraenkel · Lester Mackey
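For context, the two classical baselines that these learned models augment fit in a couple of lines. This is a simplified sketch; the paper's Climatology++ and Persistence++ add learned corrections on top:

```python
import numpy as np

def persistence_forecast(series):
    """Persistence baseline: the forecast is the most recent observation."""
    return series[-1]

def climatology_forecast(past_values_same_date):
    """Climatology baseline: the forecast is the historical mean of the
    target variable for this date (or window) across past years."""
    return float(np.mean(past_values_same_date))
```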
Tue 10:00 a.m. - 10:45 a.m.
Anima Anandkumar: Role of AI in predicting and mitigating climate change
(Keynote talk)
SlidesLive Video » Title: Role of AI in predicting and mitigating climate change Abstract: Predicting extreme weather events in a warming world at fine scales is a grand challenge faced by climate scientists. Policy makers and society at large depend on reliable predictions to plan for the disastrous impact of climate change and develop effective adaptation strategies. Deep learning (DL) offers novel methods that are potentially more accurate and orders of magnitude faster than traditional weather and climate models for predicting extreme events. The Fourier Neural Operator (FNO), a novel deep learning method, has shown promising results for predicting complex systems, such as spatio-temporal chaos, turbulence, and weather phenomena. I will give an overview of the method as well as our recent results. Bio: Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors such as the Alfred P. Sloan Fellowship, NSF Career Award, Young Investigator Awards from DoD, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focus is on unsupervised AI, optimization, and tensor methods.
Anima Anandkumar
Tue 10:45 a.m. - 10:55 a.m.
ClimART: A Benchmark Dataset for Emulating Atmospheric Radiative Transfer in Weather and Climate Models
(Spotlight)
link »
SlidesLive Video » Numerical simulations of Earth's weather and climate require substantial amounts of computation. This has led to a growing interest in replacing subroutines that explicitly compute physical processes with approximate machine learning (ML) methods that are fast at inference time. Within weather and climate models, atmospheric radiative transfer (RT) calculations are especially expensive. This has made them a popular target for neural network-based emulators. However, prior work is hard to compare due to the lack of a comprehensive dataset and standardized best practices for ML benchmarking. To fill this gap, we introduce the ClimART dataset, with more than 10 million samples from present, pre-industrial, and future climate conditions. ClimART poses several methodological challenges for the ML community, such as multiple out-of-distribution test sets, underlying domain physics, and a trade-off between accuracy and inference speed. We also present several novel baselines that indicate shortcomings of the datasets and network architectures used in prior work.
Salva Rühling Cachay · Venkatesh Ramesh · Jason N. S. Cole · Howard Barker · David Rolnick
Tue 10:55 a.m. - 11:05 a.m.
DeepQuake: Artificial Intelligence for Earthquake Forecasting Using Fine-Grained Climate Data
(Spotlight)
link »
SlidesLive Video » Earthquakes are one of the most catastrophic natural disasters, making accurate, fine-grained, and real-time earthquake forecasting extremely important for the safety and security of human lives. In this work, we propose DeepQuake, a hybrid physics and deep learning model for fine-grained earthquake forecasting using time-series data of the horizontal displacement of earth's surface measured from continuously operating Global Positioning System (cGPS) data. Recent studies using cGPS data have established a link between transient deformation within earth's crust and climate variables. DeepQuake's physics-based pre-processing algorithm extracts relevant features, including the x, y, and xy components of strain in earth's crust, capturing earth's elastic response to these climate variables, and feeds them into a deep learning neural network to predict key earthquake variables such as the time, location, magnitude, and depth of a future earthquake. Results across California show promising correlations between cGPS-derived strain patterns and the earthquake catalog ground truth for a given location and time.
Yash Narayan
Tue 11:05 a.m. - 11:18 a.m.
Hurricane Forecasting: A Novel Multimodal Machine Learning Framework
(Spotlight)
link »
SlidesLive Video » This paper describes a machine learning (ML) framework for tropical cyclone intensity and track forecasting, combining multiple distinct ML techniques and utilizing diverse data sources. Our framework, which we refer to as Hurricast (HURR), is built upon the combination of distinct data processing techniques using gradient-boosted trees and novel encoder-decoder architectures, including CNN, GRU and Transformer components. We propose a deep-learning feature extractor methodology to mix spatial-temporal data with statistical data efficiently. Our multimodal framework unleashes the potential of making forecasts based on a wide range of data sources, including historical storm data, and visual data such as reanalysis atmospheric images. We evaluate our models against current operational forecasts in the North Atlantic (NA) and Eastern Pacific (EP) basins over 2016-2019 for a 24-hour lead time, and show our models consistently outperform statistical-dynamical models and compete with the best dynamical models. Furthermore, the inclusion of Hurricast into an operational forecast consensus model leads to a significant improvement of 5% - 15% over NHC's official forecast, thus highlighting the complementary properties with existing approaches.
Léonard Boussioux · Cynthia Zeng · Dimitris Bertsimas · Théo Guenais
Tue 11:20 a.m. - 11:30 a.m.
Emissions-aware electricity network expansion planning via implicit differentiation
(Spotlight)
link »
SlidesLive Video » We consider a variant of the classical problem of designing or expanding an electricity network. Instead of minimizing only investment and production costs, however, we seek to minimize some mixture of cost and greenhouse gas emissions, even if the underlying dispatch model does not tax emissions. This enables grid planners to directly minimize consumption-based emissions, when expanding or modifying the grid, regardless of whether or not the carbon market incorporates a carbon tax. We solve this problem using gradient descent with implicit differentiation, a technique recently popularized in machine learning. To demonstrate the method, we optimize transmission and storage resources on the IEEE 14-bus test network and compare our solution to one generated by standard planning with a carbon tax. Our solution significantly reduces emissions for the same levelized cost of electricity. |
Anthony Degleris · Lucas Fuentes · Abbas El Gamal · Ram Rajagopal
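The differentiation through the inner dispatch problem rests on the implicit function theorem. A scalar illustration under our own toy setup, not the paper's network model: if the inner problem is min_x 0.5*(x - theta)**2, its optimality condition is F(x, theta) = x - theta = 0, and the sensitivity of the solution follows directly:

```python
def implicit_gradient(dF_dx, dF_dtheta):
    """Implicit function theorem: if F(x(theta), theta) = 0 defines the
    inner solution x(theta), then dx/dtheta = -(dF/dtheta) / (dF/dx)."""
    return -dF_dtheta / dF_dx

# For F(x, theta) = x - theta: dF/dx = 1 and dF/dtheta = -1, so the
# inner solution x*(theta) = theta moves one-for-one with theta.
sensitivity = implicit_gradient(1.0, -1.0)
```

The paper applies the same principle to the optimality conditions of a full dispatch model, so that planning variables can be updated by gradient descent on a cost-plus-emissions objective.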
Tue 11:30 a.m. - 12:15 p.m.
Tianzhen Hong: Machine Learning for Smart Buildings: Applications and Perspectives
(Keynote talk)
SlidesLive Video » Title: Machine Learning for Smart Buildings: Applications and Perspectives Abstract: Fueled by big data, powerful computing, and advanced algorithms, machine learning has been explored and applied to smart buildings and has demonstrated its potential to enhance building performance. This talk presents an overview of how machine learning has been applied across different stages of the building life cycle, with a focus on building design, operation, and control. A few applications using machine learning will be presented. Challenges and opportunities of applying machine learning to buildings research will also be discussed. Bio: Dr. Tianzhen Hong is a Senior Scientist and Deputy Head of the Building Technologies Department of LBNL. He leads the Urban Systems Group and a team with research on data, methods, computing, occupant behavior, and policy for design and operation of low energy buildings and sustainable urban systems. He is an IBPSA Fellow and ASHRAE Fellow. He received a B.Eng. and Ph.D. in HVACR, and a B.Sc. in Applied Mathematics from Tsinghua University, China.
Tianzhen Hong
Tue 12:15 p.m. - 1:00 p.m.
Poster Session 2
(Poster Session)
Gather.Town rooms:
Papers Track:
Proposals Track:
Tutorials Track:
Tue 1:00 p.m. - 2:00 p.m.
Discussion Panel: Data
(Discussion Panel)
SlidesLive Video » This panel will be on the data and innovation gaps and opportunities needed to support large-scale projects at the intersection of climate change and AI, and what approaches should be taken, in different contexts (academia, industry, government, and intersections among them) to address the data challenges. |
Bethany Lusch · Kakani Katija · Brookie Guzder-Williams · Tricia Martinez
Tue 2:00 p.m. - 2:10 p.m.
Tutorials track intro
(Track intro)
SlidesLive Video »
Tue 2:10 p.m. - 2:20 p.m.
A day in a sustainable life
(Tutorial)
link »
SlidesLive Video » In this notebook, we show the reader how to use an electrical battery to minimise the operational carbon intensity of a building. The central idea is to charge the battery when the carbon intensity of the grid energy mix is low, and vice versa. The same methodology is used in practice to optimise for a number of different objective functions, including energy costs. Taking the hypothetical case of Pi, an eco-conscious and tech-savvy householder in the UK, we walk the reader through getting carbon intensity data, and how to use this with a number of different optimisation algorithms to decarbonise. Starting off with easy-to-understand brute force search, we establish a baseline for subsequent (hopefully smarter) optimisation algorithms. This should come naturally, since Pi is a data scientist in their day job, where they often use grid and random search to tune hyperparameters of ML models. The second optimisation algorithm we explore is a genetic algorithm, which belongs to the class of derivative-free optimisers and is consequently extremely versatile. However, the flexibility of these algorithms comes at the cost of computational speed and effort. In many situations, it makes sense to utilise an optimisation method which can make use of the special structure in the problem. As the final step, we see how Pi can optimally solve the problem of minimising their carbon intensity by formulating it as a linear program. Along the way, we also keep an eye out for some of the most important challenges that arise in practice.
Hussain Kazmi
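The notebook's central idea, charging when grid carbon intensity is low, reduces in its simplest form to picking the lowest-carbon hours. A sketch under our own simplifying assumption of a fixed number of whole charging hours (the notebook itself compares brute-force search, a genetic algorithm, and a linear program):

```python
def lowest_carbon_hours(carbon_intensity, hours_needed):
    """Return the indices of the hours with the lowest grid carbon
    intensity (gCO2/kWh), i.e. the best hours to charge the battery."""
    ranked = sorted(range(len(carbon_intensity)),
                    key=lambda hour: carbon_intensity[hour])
    return sorted(ranked[:hours_needed])
```

Real schedules add constraints (battery capacity, charge/discharge rates, household demand), which is what makes the linear-programming formulation in the notebook attractive.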
Tue 2:20 p.m. - 2:30 p.m.
Open Catalyst Project: An Introduction to ML applied to Molecular Simulations
(Tutorial)
link »
SlidesLive Video » As the world continues to battle energy scarcity and climate change, the future of our energy infrastructure is a growing challenge. Renewable energy technologies offer the opportunity to drive efficient carbon-neutral means for energy storage and generation. Doing so, however, requires the discovery of efficient and economic catalysts (materials) to accelerate associated chemical processes. A common approach in discovering high performance catalysts is using molecular simulations. Specifically, each simulation models the interaction of a catalyst surface with molecules that are commonly seen in electrochemical reactions. By predicting these interactions accurately, the catalyst's impact on the overall rate of a chemical reaction may be estimated. The Open Catalyst Project (OCP) aims to develop new ML methods and models to accelerate the catalyst simulation process for renewable energy technologies and improve our ability to predict properties across catalyst composition. The initial release of the Open Catalyst 2020 (OC20) dataset presented the largest open dataset of molecular combinations, spanning 55 unique elements and over 130 million data points. We will present a comprehensive tutorial of the Open Catalyst Project repository, including (1) accessing & visualizing the dataset, (2) an overview of the various tasks, (3) training graph neural network (GNN) models, (4) developing your own model for OCP, (5) running ML-driven simulations, and (6) visualizing the results. Primary tools include PyTorch and PyTorch Geometric. No background in chemistry is assumed. Following this tutorial, we hope to better equip attendees with a basic understanding of the data and repository.
Muhammed Shuaibi
Tue 2:30 p.m. - 3:15 p.m.
Amy McGovern: Developing Trustworthy AI for Weather and Climate
(Keynote talk)
link »
SlidesLive Video » Title: Developing Trustworthy AI for Weather and Climate Abstract: In this talk we give an overview of the work of the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. For this talk, we will focus on our applications to weather and climate predictions, including convective hazards and extreme heat. We also briefly review the need for the development of principles for ethical and responsible AI for weather and climate. Bio: Amy McGovern is the Lloyd G. and Joyce Austin Presidential Professor in the School of Computer Science and School of Meteorology at the University of Oklahoma, and Director of the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography.
Amy McGovern
Tue 3:15 p.m. - 3:30 p.m.
Closing remarks and awards
(Closing remarks)
SlidesLive Video »
Tue 3:30 p.m. - 4:15 p.m.
Poster Session 3
(Poster Session)
Gather.Town Rooms:
Papers Track:
Proposals Track:
Tue 4:15 p.m. - 5:00 p.m.
Gather.Town networking
(Networking)
Author Information
Maria João Sousa (IDMEC, Instituto Superior Técnico, Universidade de Lisboa)
Hari Prasanna Das (University of California Berkeley)
Sally Simone Fobi (Columbia University)
Jan Drgona (Pacific Northwest National Laboratory)
I am a data scientist in the Physics and Computational Sciences Division (PCSD) at Pacific Northwest National Laboratory, Richland, WA. My current research interests fall in the intersection of model-based optimal control, constrained optimization, and machine learning.
Tegan Maharaj (MILA)
Yoshua Bengio (Mila / U. Montreal)
Yoshua Bengio is Full Professor in the computer science and operations research department at U. Montreal, scientific director and founder of Mila and of IVADO, Turing Award 2018 recipient, Canada Research Chair in Statistical Learning Algorithms, as well as a Canada AI CIFAR Chair. He pioneered deep learning and in 2018 received the most citations per day among all computer scientists worldwide. He is an officer of the Order of Canada, a member of the Royal Society of Canada, was awarded the Killam Prize, the Marie-Victorin Prize and the Radio-Canada Scientist of the Year in 2017, and he is a member of the NeurIPS advisory board and co-founder of the ICLR conference, as well as program director of the CIFAR program on Learning in Machines and Brains. His goal is to contribute to uncovering the principles giving rise to intelligence through learning, as well as to favour the development of AI for the benefit of all.
More from the Same Authors
-
2020 : Physics-constrained Deep Recurrent Neural Models of Building Thermal Dynamics »
Jan Drgona -
2020 : Expert-in-the-loop Systems Towards Safety-critical Machine Learning Technology in Wildfire Intelligence »
Maria João Sousa -
2020 : Do Occupants in a Building exhibit patterns in Energy Consumption? Analyzing Clusters in Energy Social Games »
Hari Prasanna Das -
2021 Spotlight: Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization »
Kartik Ahuja · Ethan Caballero · Dinghuai Zhang · Jean-Christophe Gagnon-Audet · Yoshua Bengio · Ioannis Mitliagkas · Irina Rish -
2021 : Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning »
Nan Rosemary Ke · Aniket Didolkar · Sarthak Mittal · Anirudh Goyal · Guillaume Lajoie · Stefan Bauer · Danilo Jimenez Rezende · Yoshua Bengio · Chris Pal · Michael Mozer -
2021 : Long-Term Credit Assignment via Model-based Temporal Shortcuts »
Michel Ma · Pierluca D'Oro · Yoshua Bengio · Pierre-Luc Bacon -
2021 : A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning »
Mingde Zhao · Zhen Liu · Sitao Luan · Shuyuan Zhang · Doina Precup · Yoshua Bengio -
2021 : Effect of diversity in Meta-Learning »
Ramnath Kumar · Tristan Deleu · Yoshua Bengio -
2021 : Learning Neural Causal Models with Active Interventions »
Nino Scherrer · Olexa Bilaniuk · Yashas Annadani · Anirudh Goyal · Patrick Schwab · Bernhard Schölkopf · Michael Mozer · Yoshua Bengio · Stefan Bauer · Nan Rosemary Ke -
2021 : Multi-Domain Balanced Sampling Improves Out-of-Distribution Generalization of Chest X-ray Pathology Prediction Models »
Enoch Tetteh · David Krueger · Joseph Paul Cohen · Yoshua Bengio -
2022 Poster: Discrete Compositional Representations as an Abstraction for Goal Conditioned Reinforcement Learning »
Riashat Islam · Hongyu Zang · Anirudh Goyal · Alex Lamb · Kenji Kawaguchi · Xin Li · Romain Laroche · Yoshua Bengio · Remi Tachet des Combes -
2022 : Posterior samples of source galaxies in strong gravitational lenses with score-based priors »
Alexandre Adam · Adam Coogan · Nikolay Malkin · Ronan Legin · Laurence Perreault-Levasseur · Yashar Hezaveh · Yoshua Bengio -
2022 : Designing Biological Sequences via Meta-Reinforcement Learning and Bayesian Optimization »
Leo Feng · Padideh Nouri · Aneri Muni · Yoshua Bengio · Pierre-Luc Bacon -
2022 : Bayesian Dynamic Causal Discovery »
Alexander Tong · Lazar Atanackovic · Jason Hartford · Yoshua Bengio -
2022 : Object-centric causal representation learning »
Amin Mansouri · Jason Hartford · Kartik Ahuja · Yoshua Bengio -
2022 : Equivariance with Learned Canonical Mappings »
Oumar Kaba · Arnab Mondal · Yan Zhang · Yoshua Bengio · Siamak Ravanbakhsh -
2022 : Interventional Causal Representation Learning »
Kartik Ahuja · Yixin Wang · Divyat Mahajan · Yoshua Bengio -
2022 : Multi-Objective GFlowNets »
Moksh Jain · Sharath Chandra Raparthy · Alex Hernandez-Garcia · Jarrid Rector-Brooks · Yoshua Bengio · Santiago Miret · Emmanuel Bengio -
2022 : PhAST: Physics-Aware, Scalable, and Task-specific GNNs for accelerated catalyst design »
Alexandre Duval · Victor Schmidt · Alex Hernandez-Garcia · Santiago Miret · Yoshua Bengio · David Rolnick -
2022 : Efficient Queries Transformer Neural Processes »
Leo Feng · Hossein Hajimirsadeghi · Yoshua Bengio · Mohamed Osama Ahmed -
2022 : Rethinking Learning Dynamics in RL using Adversarial Networks »
Ramnath Kumar · Tristan Deleu · Yoshua Bengio -
2022 : Consistent Training via Energy-Based GFlowNets for Modeling Discrete Joint Distributions »
Chanakya Ekbote · Moksh Jain · Payel Das · Yoshua Bengio -
2022 : A General-Purpose Neural Architecture for Geospatial Systems »
Martin Weiss · Nasim Rahaman · Frederik Träuble · Francesco Locatello · Alexandre Lacoste · Yoshua Bengio · Erran Li Li · Chris Pal · Bernhard Schölkopf -
2022 Workshop: Tackling Climate Change with Machine Learning »
Peetak Mitra · Maria João Sousa · Mark Roth · Jan Drgona · Emma Strubell · Yoshua Bengio -
2022 Competition: The CityLearn Challenge 2022 »
Zoltan Nagy · Kingsley Nweye · Sharada Mohanty · Siva Sankaranarayanan · Jan Drgona · Tianzhen Hong · Sourav Dey · Gregor Henze -
2022 Spotlight: Lightning Talks 2A-4 »
Sarthak Mittal · Richard Grumitt · Zuoyu Yan · Lihao Wang · Dongsheng Wang · Alexander Korotin · Jiangxin Sun · Ankit Gupta · Vage Egiazarian · Tengfei Ma · Yi Zhou · Yishi Xu · Albert Gu · Biwei Dai · Chunyu Wang · Yoshua Bengio · Uros Seljak · Miaoge Li · Guillaume Lajoie · Yiqun Wang · Liangcai Gao · Lingxiao Li · Jonathan Berant · Huang Hu · Xiaoqing Zheng · Zhibin Duan · Hanjiang Lai · Evgeny Burnaev · Zhi Tang · Zhi Jin · Xuanjing Huang · Chaojie Wang · Yusu Wang · Jian-Fang Hu · Bo Chen · Chao Chen · Hao Zhou · Mingyuan Zhou -
2022 Spotlight: Is a Modular Architecture Enough? »
Sarthak Mittal · Yoshua Bengio · Guillaume Lajoie -
2022 : Invited Keynote 1 »
Yoshua Bengio -
2022 : FL Games: A Federated Learning Framework for Distribution Shifts »
Sharut Gupta · Kartik Ahuja · Mohammad Havaei · Niladri Chatterjee · Yoshua Bengio -
2022 : Panel Discussion »
Cheng Zhang · Mihaela van der Schaar · Ilya Shpitser · Aapo Hyvarinen · Yoshua Bengio · Bernhard Schölkopf -
2022 Workshop: AI for Science: Progress and Promises »
Yi Ding · Yuanqi Du · Tianfan Fu · Hanchen Wang · Anima Anandkumar · Yoshua Bengio · Anthony Gitter · Carla Gomes · Aviv Regev · Max Welling · Marinka Zitnik -
2022 Poster: Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints »
Jose Gallego-Posada · Juan Ramirez · Akram Erraqabi · Yoshua Bengio · Simon Lacoste-Julien -
2022 Poster: MAgNet: Mesh Agnostic Neural PDE Solver »
Oussama Boussif · Yoshua Bengio · Loubna Benabbou · Dan Assouline -
2022 Poster: Neural Attentive Circuits »
Martin Weiss · Nasim Rahaman · Francesco Locatello · Chris Pal · Yoshua Bengio · Bernhard Schölkopf · Erran Li Li · Nicolas Ballas -
2022 Poster: Weakly Supervised Representation Learning with Sparse Perturbations »
Kartik Ahuja · Jason Hartford · Yoshua Bengio -
2022 Poster: Trajectory balance: Improved credit assignment in GFlowNets »
Nikolay Malkin · Moksh Jain · Emmanuel Bengio · Chen Sun · Yoshua Bengio -
2022 Poster: Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning »
Aniket Didolkar · Kshitij Gupta · Anirudh Goyal · Nitesh Bharadwaj Gundavarapu · Alex Lamb · Nan Rosemary Ke · Yoshua Bengio -
2022 Poster: Is a Modular Architecture Enough? »
Sarthak Mittal · Yoshua Bengio · Guillaume Lajoie -
2022 : Keynote talk: A Deep Learning Journey »
Yoshua Bengio -
2021 : Live Q&A Session 2 with Susan Athey, Yoshua Bengio, Sujeeth Bharadwaj, Jane Wang, Joshua Vogelstein, Weiwei Yang »
Susan Athey · Yoshua Bengio · Sujeeth Bharadwaj · Jane Wang · Weiwei Yang · Joshua T Vogelstein -
2021 : Live Q&A Session 1 with Yoshua Bengio, Leyla Isik, Konrad Kording, Bernhard Scholkopf, Amit Sharma, Joshua Vogelstein, Weiwei Yang »
Yoshua Bengio · Leyla Isik · Konrad Kording · Bernhard Schölkopf · Joshua T Vogelstein · Weiwei Yang -
2021 : General Discussion 1 - What is out of distribution (OOD) generalization and why is it important? with Yoshua Bengio, Leyla Isik, Max Welling »
Yoshua Bengio · Leyla Isik · Max Welling · Joshua T Vogelstein · Weiwei Yang -
2021 Workshop: Machine Learning for the Developing World (ML4D): Global Challenges »
Paula Rodriguez Diaz · Konstantin Klemmer · Sally Simone Fobi · Oluwafemi Azeez · Niveditha Kalavakonda · Aya Salama · Tejumade Afonja -
2021 : AI X Discovery »
Yoshua Bengio -
2021 : Differentiable Parametric Optimization Approach to Power System Load Modeling »
Jan Drgona · Andrew August · Elliott Skomski -
2021 : Neural Differentiable Predictive Control »
Jan Drgona · Aaron Tuor · Draguna Vrabie -
2021 : Panel Discussion 2 »
Susan L Epstein · Yoshua Bengio · Lucina Uddin · Rohan Paul · Steve Fleming -
2021 : Desiderata and ML Research Programme for Higher-Level Cognition »
Yoshua Bengio -
2021 Workshop: Causal Inference & Machine Learning: Why now? »
Elias Bareinboim · Bernhard Schölkopf · Terrence Sejnowski · Yoshua Bengio · Judea Pearl -
2021 Poster: Dynamic Inference with Neural Interpreters »
Nasim Rahaman · Muhammad Waleed Gondal · Shruti Joshi · Peter Gehler · Yoshua Bengio · Francesco Locatello · Bernhard Schölkopf -
2021 Poster: Gradient Starvation: A Learning Proclivity in Neural Networks »
Mohammad Pezeshki · Oumar Kaba · Yoshua Bengio · Aaron Courville · Doina Precup · Guillaume Lajoie -
2021 Poster: A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning »
Mingde Zhao · Zhen Liu · Sitao Luan · Shuyuan Zhang · Doina Precup · Yoshua Bengio -
2021 Poster: On the Stochastic Stability of Deep Markov Models »
Jan Drgona · Sayak Mukherjee · Jiaxin Zhang · Frank Liu · Mahantesh Halappanavar -
2021 Poster: Neural Production Systems »
Anirudh Goyal · Aniket Didolkar · Nan Rosemary Ke · Charles Blundell · Philippe Beaudoin · Nicolas Heess · Michael Mozer · Yoshua Bengio -
2021 Poster: Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation »
Emmanuel Bengio · Moksh Jain · Maksym Korablyov · Doina Precup · Yoshua Bengio -
2021 Poster: The Causal-Neural Connection: Expressiveness, Learnability, and Inference »
Kevin Xia · Kai-Zhan Lee · Yoshua Bengio · Elias Bareinboim -
2021 Poster: Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization »
Kartik Ahuja · Ethan Caballero · Dinghuai Zhang · Jean-Christophe Gagnon-Audet · Yoshua Bengio · Ioannis Mitliagkas · Irina Rish -
2021 Poster: Discrete-Valued Neural Communication »
Dianbo Liu · Alex Lamb · Kenji Kawaguchi · Anirudh Goyal · Chen Sun · Michael Mozer · Yoshua Bengio -
2020 : Panel discussion 2 »
Danielle S Bassett · Yoshua Bengio · Cristina Savin · David Duvenaud · Anna Choromanska · Yanping Huang -
2020 : Invited Talk Yoshua Bengio »
Yoshua Bengio -
2020 : Invited Talk #7 »
Yoshua Bengio -
2020 : Panel #1 »
Yoshua Bengio · Daniel Kahneman · Henry Kautz · Luis Lamb · Gary Marcus · Francesca Rossi -
2020 : Yoshua Bengio - Incentives for Researchers »
Yoshua Bengio -
2020 Workshop: Tackling Climate Change with ML »
David Dao · Evan Sherwin · Priya Donti · Lauren Kuntz · Lynn Kaack · Yumna Yusuf · David Rolnick · Catherine Nakalembe · Claire Monteleoni · Yoshua Bengio -
2020 Poster: Untangling tradeoffs between recurrence and self-attention in artificial neural networks »
Giancarlo Kerg · Bhargav Kanuparthi · Anirudh Goyal · Kyle Goyette · Yoshua Bengio · Guillaume Lajoie -
2020 Poster: Your GAN is Secretly an Energy-based Model and You Should Use Discriminator Driven Latent Sampling »
Tong Che · Ruixiang ZHANG · Jascha Sohl-Dickstein · Hugo Larochelle · Liam Paull · Yuan Cao · Yoshua Bengio -
2020 Poster: Hybrid Models for Learning to Branch »
Prateek Gupta · Maxime Gasse · Elias Khalil · Pawan K Mudigonda · Andrea Lodi · Yoshua Bengio -
2019 : Panel Session: A new hope for neuroscience »
Yoshua Bengio · Blake Richards · Timothy Lillicrap · Ila Fiete · David Sussillo · Doina Precup · Konrad Kording · Surya Ganguli -
2019 : Yoshua Bengio - Towards compositional understanding of the world by agent-based deep learning »
Yoshua Bengio -
2019 : Lunch Break and Posters »
Xingyou Song · Elad Hoffer · Wei-Cheng Chang · Jeremy Cohen · Jyoti Islam · Yaniv Blumenfeld · Andreas Madsen · Jonathan Frankle · Sebastian Goldt · Satrajit Chatterjee · Abhishek Panigrahi · Alex Renda · Brian Bartoldson · Israel Birhane · Aristide Baratin · Niladri Chatterji · Roman Novak · Jessica Forde · YiDing Jiang · Yilun Du · Linara Adilova · Michael Kamp · Berry Weinstein · Itay Hubara · Tal Ben-Nun · Torsten Hoefler · Daniel Soudry · Hsiang-Fu Yu · Kai Zhong · Yiming Yang · Inderjit Dhillon · Jaime Carbonell · Yanqing Zhang · Dar Gilboa · Johannes Brandstetter · Alexander R Johansen · Gintare Karolina Dziugaite · Raghav Somani · Ari Morcos · Freddie Kalaitzis · Hanie Sedghi · Lechao Xiao · John Zech · Muqiao Yang · Simran Kaur · Qianli Ma · Yao-Hung Hubert Tsai · Ruslan Salakhutdinov · Sho Yaida · Zachary Lipton · Daniel Roy · Michael Carbin · Florent Krzakala · Lenka Zdeborová · Guy Gur-Ari · Ethan Dyer · Dilip Krishnan · Hossein Mobahi · Samy Bengio · Behnam Neyshabur · Praneeth Netrapalli · Kris Sankaran · Julien Cornebise · Yoshua Bengio · Vincent Michalski · Samira Ebrahimi Kahou · Md Rifat Arefin · Jiri Hron · Jaehoon Lee · Jascha Sohl-Dickstein · Samuel Schoenholz · David Schwab · Dongyu Li · Sang Keun Choe · Henning Petzka · Ashish Verma · Zhichao Lin · Cristian Sminchisescu -
2019 : Lunch + Poster Session »
Frederik Gerzer · Bill Yang Cai · Pieter-Jan Hoedt · Kelly Kochanski · Soo Kyung Kim · Yunsung Lee · Sunghyun Park · Sharon Zhou · Martin Gauch · Jonathan Wilson · Joyjit Chatterjee · Shamindra Shrotriya · Dimitri Papadimitriou · Christian Schön · Valentina Zantedeschi · Gabriella Baasch · Willem Waegeman · Gautier Cosne · Dara Farrell · Brendan Lucier · Letif Mones · Caleb Robinson · Tafara Chitsiga · Victor Kristof · Hari Prasanna Das · Yimeng Min · Alexandra Puchko · Alexandra Luccioni · Kyle Story · Jason Hickey · Yue Hu · Björn Lütjens · Zhecheng Wang · Renzhi Jing · Genevieve Flaspohler · Jingfan Wang · Saumya Sinha · Qinghu Tang · Armi Tiihonen · Ruben Glatt · Muge Komurcu · Jan Drgona · Juan Gomez-Romero · Ashish Kapoor · Dylan J Fitzpatrick · Alireza Rezvanifar · Adrian Albert · Olya (Olga) Irzak · Kara Lamb · Ankur Mahesh · Kiwan Maeng · Frederik Kratzert · Sorelle Friedler · Niccolo Dalmasso · Alex Robson · Lindiwe Malobola · Lucas Maystre · Yu-wen Lin · Surya Karthik Mukkavili · Brian Hutchinson · Alexandre Lacoste · Yanbing Wang · Zhengcheng Wang · Yinda Zhang · Victoria Preston · Jacob Pettit · Draguna Vrabie · Miguel Molina-Solana · Tonio Buonassisi · Andrew Annex · Tunai P Marques · Catalin Voss · Johannes Rausch · Max Evans -
2019 : Climate Change: A Grand Challenge for ML »
Yoshua Bengio · Carla Gomes · Andrew Ng · Jeff Dean · Lester Mackey -
2019 Workshop: Joint Workshop on AI for Social Good »
Fei Fang · Joseph Aylett-Bullock · Marc-Antoine Dilhac · Brian Green · natalie saltiel · Dhaval Adjodah · Jack Clark · Sean McGregor · Margaux Luck · Jonathan Penn · Tristan Sylvain · Geneviève Boucher · Sydney Swaine-Simon · Girmaw Abebe Tadesse · Myriam Côté · Anna Bethke · Yoshua Bengio -
2019 Workshop: Tackling Climate Change with ML »
David Rolnick · Priya Donti · Lynn Kaack · Alexandre Lacoste · Tegan Maharaj · Andrew Ng · John Platt · Jennifer Chayes · Yoshua Bengio -
2019 : Opening remarks »
Yoshua Bengio -
2019 : Approaches to Understanding AI »
Yoshua Bengio · Roel Dobbe · Madeleine Elish · Joshua Kroll · Jacob Metcalf · Jack Poulson -
2019 : Invited Talk »
Yoshua Bengio -
2019 Workshop: Retrospectives: A Venue for Self-Reflection in ML Research »
Ryan Lowe · Yoshua Bengio · Joelle Pineau · Michela Paganini · Jessica Forde · Shagun Sodhani · Abhishek Gupta · Joel Lehman · Peter Henderson · Kanika Madan · Koustuv Sinha · Xavier Bouthillier -
2019 Poster: How to Initialize your Network? Robust Initialization for WeightNorm & ResNets »
Devansh Arpit · Víctor Campos · Yoshua Bengio -
2019 Poster: Wasserstein Dependency Measure for Representation Learning »
Sherjil Ozair · Corey Lynch · Yoshua Bengio · Aaron van den Oord · Sergey Levine · Pierre Sermanet -
2019 Poster: Unsupervised State Representation Learning in Atari »
Ankesh Anand · Evan Racah · Sherjil Ozair · Yoshua Bengio · Marc-Alexandre Côté · R Devon Hjelm -
2019 Poster: Variational Temporal Abstraction »
Taesup Kim · Sungjin Ahn · Yoshua Bengio -
2019 Poster: Gradient based sample selection for online continual learning »
Rahaf Aljundi · Min Lin · Baptiste Goujaud · Yoshua Bengio -
2019 Poster: MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis »
Kundan Kumar · Rithesh Kumar · Thibault de Boissiere · Lucas Gestin · Wei Zhen Teoh · Jose Sotelo · Alexandre de Brébisson · Yoshua Bengio · Aaron Courville -
2019 Invited Talk: From System 1 Deep Learning to System 2 Deep Learning »
Yoshua Bengio -
2019 Poster: On Adversarial Mixup Resynthesis »
Christopher Beckham · Sina Honari · Alex Lamb · Vikas Verma · Farnoosh Ghadiri · R Devon Hjelm · Yoshua Bengio · Chris Pal -
2019 Poster: Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input »
Maxence Ernoult · Julie Grollier · Damien Querlioz · Yoshua Bengio · Benjamin Scellier -
2019 Poster: Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics »
Giancarlo Kerg · Kyle Goyette · Maximilian Puelma Touzel · Gauthier Gidel · Eugene Vorontsov · Yoshua Bengio · Guillaume Lajoie -
2019 Oral: Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input »
Maxence Ernoult · Julie Grollier · Damien Querlioz · Yoshua Bengio · Benjamin Scellier -
2018 : Opening remarks »
Yoshua Bengio -
2018 Workshop: AI for social good »
Margaux Luck · Tristan Sylvain · Joseph Paul Cohen · Arsene Fansi Tchango · Valentine Goddard · Aurelie Helouis · Yoshua Bengio · Sam Greydanus · Cody Wild · Taras Kucherenko · Arya Farahi · Jonathan Penn · Sean McGregor · Mark Crowley · Abhishek Gupta · Kenny Chen · Myriam Côté · Rediet Abebe -
2018 Poster: Image-to-image translation for cross-domain disentanglement »
Abel Gonzalez-Garcia · Joost van de Weijer · Yoshua Bengio -
2018 Poster: MetaGAN: An Adversarial Approach to Few-Shot Learning »
Ruixiang ZHANG · Tong Che · Zoubin Ghahramani · Yoshua Bengio · Yangqiu Song -
2018 Poster: Bayesian Model-Agnostic Meta-Learning »
Jaesik Yoon · Taesup Kim · Ousmane Dia · Sungwoong Kim · Yoshua Bengio · Sungjin Ahn -
2018 Poster: Sparse Attentive Backtracking: Temporal Credit Assignment Through Reminding »
Nan Rosemary Ke · Anirudh Goyal · Olexa Bilaniuk · Jonathan Binas · Michael Mozer · Chris Pal · Yoshua Bengio -
2018 Spotlight: Sparse Attentive Backtracking: Temporal Credit Assignment Through Reminding »
Nan Rosemary Ke · Anirudh Goyal · Olexa Bilaniuk · Jonathan Binas · Michael Mozer · Chris Pal · Yoshua Bengio -
2018 Spotlight: Bayesian Model-Agnostic Meta-Learning »
Jaesik Yoon · Taesup Kim · Ousmane Dia · Sungwoong Kim · Yoshua Bengio · Sungjin Ahn -
2018 Poster: Dendritic cortical microcircuits approximate the backpropagation algorithm »
João Sacramento · Rui Ponte Costa · Yoshua Bengio · Walter Senn -
2018 Oral: Dendritic cortical microcircuits approximate the backpropagation algorithm »
João Sacramento · Rui Ponte Costa · Yoshua Bengio · Walter Senn -
2017 : Yoshua Bengio »
Yoshua Bengio -
2017 : From deep learning of disentangled representations to higher-level cognition »
Yoshua Bengio -
2017 : More Steps towards Biologically Plausible Backprop »
Yoshua Bengio -
2017 : A3T: Adversarially Augmented Adversarial Training »
Aristide Baratin · Simon Lacoste-Julien · Yoshua Bengio · Akram Erraqabi -
2017 : Competition III: The Conversational Intelligence Challenge »
Mikhail Burtsev · Ryan Lowe · Iulian Vlad Serban · Yoshua Bengio · Alexander Rudnicky · Alan W Black · Shrimai Prabhumoye · Artem Rodichev · Nikita Smetanin · Denis Fedorenko · CheongAn Lee · EUNMI HONG · Hwaran Lee · Geonmin Kim · Nicolas Gontier · Atsushi Saito · Andrey Gershfeld · Artem Burachenok -
2017 Poster: Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net »
Anirudh Goyal · Nan Rosemary Ke · Surya Ganguli · Yoshua Bengio -
2017 Demonstration: A Deep Reinforcement Learning Chatbot »
Iulian Vlad Serban · Chinnadhurai Sankar · Mathieu Germain · Saizheng Zhang · Zhouhan Lin · Sandeep Subramanian · Taesup Kim · Michael Pieper · Sarath Chandar · Nan Rosemary Ke · Sai Rajeswar Mudumba · Alexandre de Brébisson · Jose Sotelo · Dendi A Suhubdy · Vincent Michalski · Joelle Pineau · Yoshua Bengio -
2017 Poster: GibbsNet: Iterative Adversarial Inference for Deep Graphical Models »
Alex Lamb · R Devon Hjelm · Yaroslav Ganin · Joseph Paul Cohen · Aaron Courville · Yoshua Bengio -
2017 Poster: Plan, Attend, Generate: Planning for Sequence-to-Sequence Models »
Caglar Gulcehre · Francis Dutil · Adam Trischler · Yoshua Bengio -
2017 Poster: Z-Forcing: Training Stochastic Recurrent Networks »
Anirudh Goyal · Alessandro Sordoni · Marc-Alexandre Côté · Nan Rosemary Ke · Yoshua Bengio -
2016 : Yoshua Bengio – Credit assignment: beyond backpropagation »
Yoshua Bengio -
2016 : From Brains to Bits and Back Again »
Yoshua Bengio · Terrence Sejnowski · Christos H Papadimitriou · Jakob H Macke · Demis Hassabis · Alyson Fletcher · Andreas Tolias · Jascha Sohl-Dickstein · Konrad P Koerding -
2016 : Yoshua Bengio : Toward Biologically Plausible Deep Learning »
Yoshua Bengio -
2016 : Panel on "Explainable AI" (Yoshua Bengio, Alessio Lomuscio, Gary Marcus, Stephen Muggleton, Michael Witbrock) »
Yoshua Bengio · Alessio Lomuscio · Gary Marcus · Stephen H Muggleton · Michael Witbrock -
2016 : Yoshua Bengio: From Training Low Precision Neural Nets to Training Analog Continuous-Time Machines »
Yoshua Bengio -
2016 Symposium: Deep Learning Symposium »
Yoshua Bengio · Yann LeCun · Navdeep Jaitly · Roger Grosse -
2016 Poster: Architectural Complexity Measures of Recurrent Neural Networks »
Saizheng Zhang · Yuhuai Wu · Tong Che · Zhouhan Lin · Roland Memisevic · Russ Salakhutdinov · Yoshua Bengio -
2016 Poster: Professor Forcing: A New Algorithm for Training Recurrent Networks »
Alex M Lamb · Anirudh Goyal · Ying Zhang · Saizheng Zhang · Aaron Courville · Yoshua Bengio -
2016 Poster: On Multiplicative Integration with Recurrent Neural Networks »
Yuhuai Wu · Saizheng Zhang · Ying Zhang · Yoshua Bengio · Russ Salakhutdinov -
2016 Poster: Binarized Neural Networks »
Itay Hubara · Matthieu Courbariaux · Daniel Soudry · Ran El-Yaniv · Yoshua Bengio -
2015 : RL for DL »
Yoshua Bengio -
2015 : Learning Representations for Unsupervised and Transfer Learning »
Yoshua Bengio -
2015 Symposium: Deep Learning Symposium »
Yoshua Bengio · Marc'Aurelio Ranzato · Honglak Lee · Max Welling · Andrew Y Ng -
2015 Poster: Attention-Based Models for Speech Recognition »
Jan K Chorowski · Dzmitry Bahdanau · Dmitriy Serdyuk · Kyunghyun Cho · Yoshua Bengio -
2015 Poster: Equilibrated adaptive learning rates for non-convex optimization »
Yann Dauphin · Harm de Vries · Yoshua Bengio -
2015 Spotlight: Equilibrated adaptive learning rates for non-convex optimization »
Yann Dauphin · Harm de Vries · Yoshua Bengio -
2015 Spotlight: Attention-Based Models for Speech Recognition »
Jan K Chorowski · Dzmitry Bahdanau · Dmitriy Serdyuk · Kyunghyun Cho · Yoshua Bengio -
2015 Poster: A Recurrent Latent Variable Model for Sequential Data »
Junyoung Chung · Kyle Kastner · Laurent Dinh · Kratarth Goel · Aaron Courville · Yoshua Bengio -
2015 Poster: BinaryConnect: Training Deep Neural Networks with binary weights during propagations »
Matthieu Courbariaux · Yoshua Bengio · Jean-Pierre David -
2015 Tutorial: Deep Learning »
Geoffrey E Hinton · Yoshua Bengio · Yann LeCun -
2014 Workshop: Second Workshop on Transfer and Multi-Task Learning: Theory meets Practice »
Urun Dogan · Tatiana Tommasi · Yoshua Bengio · Francesco Orabona · Marius Kloft · Andres Munoz · Gunnar Rätsch · Hal Daumé III · Mehryar Mohri · Xuezhi Wang · Daniel Hernández-lobato · Song Liu · Thomas Unterthiner · Pascal Germain · Vinay P Namboodiri · Michael Goetz · Christopher Berlind · Sigurd Spieckermann · Marta Soare · Yujia Li · Vitaly Kuznetsov · Wenzhao Lian · Daniele Calandriello · Emilie Morvant -
2014 Workshop: Deep Learning and Representation Learning »
Andrew Y Ng · Yoshua Bengio · Adam Coates · Roland Memisevic · Sharanyan Chetlur · Geoffrey E Hinton · Shamim Nemati · Bryan Catanzaro · Surya Ganguli · Herbert Jaeger · Phil Blunsom · Leon Bottou · Volodymyr Mnih · Chen-Yu Lee · Rich M Schwartz -
2014 Workshop: OPT2014: Optimization for Machine Learning »
Zaid Harchaoui · Suvrit Sra · Alekh Agarwal · Martin Jaggi · Miro Dudik · Aaditya Ramdas · Jean Lasserre · Yoshua Bengio · Amir Beck -
2014 Poster: How transferable are features in deep neural networks? »
Jason Yosinski · Jeff Clune · Yoshua Bengio · Hod Lipson -
2014 Poster: Identifying and attacking the saddle point problem in high-dimensional non-convex optimization »
Yann N Dauphin · Razvan Pascanu · Caglar Gulcehre · Kyunghyun Cho · Surya Ganguli · Yoshua Bengio -
2014 Poster: Generative Adversarial Nets »
Ian Goodfellow · Jean Pouget-Abadie · Mehdi Mirza · Bing Xu · David Warde-Farley · Sherjil Ozair · Aaron Courville · Yoshua Bengio -
2014 Poster: On the Number of Linear Regions of Deep Neural Networks »
Guido F Montufar · Razvan Pascanu · Kyunghyun Cho · Yoshua Bengio -
2014 Demonstration: Neural Machine Translation »
Bart van Merriënboer · Kyunghyun Cho · Dzmitry Bahdanau · Yoshua Bengio -
2014 Oral: How transferable are features in deep neural networks? »
Jason Yosinski · Jeff Clune · Yoshua Bengio · Hod Lipson -
2014 Poster: Iterative Neural Autoregressive Distribution Estimator NADE-k »
Tapani Raiko · Yao Li · Kyunghyun Cho · Yoshua Bengio -
2013 Workshop: Deep Learning »
Yoshua Bengio · Hugo Larochelle · Russ Salakhutdinov · Tomas Mikolov · Matthew D Zeiler · David Mcallester · Nando de Freitas · Josh Tenenbaum · Jian Zhou · Volodymyr Mnih -
2013 Workshop: Output Representation Learning »
Yuhong Guo · Dale Schuurmans · Richard Zemel · Samy Bengio · Yoshua Bengio · Li Deng · Dan Roth · Kilian Q Weinberger · Jason Weston · Kihyuk Sohn · Florent Perronnin · Gabriel Synnaeve · Pablo R Strasser · julien audiffren · Carlo Ciliberto · Dan Goldwasser -
2013 Poster: Multi-Prediction Deep Boltzmann Machines »
Ian Goodfellow · Mehdi Mirza · Aaron Courville · Yoshua Bengio -
2013 Poster: Generalized Denoising Auto-Encoders as Generative Models »
Yoshua Bengio · Li Yao · Guillaume Alain · Pascal Vincent -
2013 Poster: Stochastic Ratio Matching of RBMs for Sparse High-Dimensional Inputs »
Yann Dauphin · Yoshua Bengio -
2012 Workshop: Deep Learning and Unsupervised Feature Learning »
Yoshua Bengio · James Bergstra · Quoc V. Le -
2011 Workshop: Big Learning: Algorithms, Systems, and Tools for Learning at Scale »
Joseph E Gonzalez · Sameer Singh · Graham Taylor · James Bergstra · Alice Zheng · Misha Bilenko · Yucheng Low · Yoshua Bengio · Michael Franklin · Carlos Guestrin · Andrew McCallum · Alexander Smola · Michael Jordan · Sugato Basu -
2011 Workshop: Deep Learning and Unsupervised Feature Learning »
Yoshua Bengio · Adam Coates · Yann LeCun · Nicolas Le Roux · Andrew Y Ng -
2011 Oral: The Manifold Tangent Classifier »
Salah Rifai · Yann N Dauphin · Pascal Vincent · Yoshua Bengio · Xavier Muller -
2011 Poster: Shallow vs. Deep Sum-Product Networks »
Olivier Delalleau · Yoshua Bengio -
2011 Poster: The Manifold Tangent Classifier »
Salah Rifai · Yann N Dauphin · Pascal Vincent · Yoshua Bengio · Xavier Muller -
2011 Poster: Algorithms for Hyper-Parameter Optimization »
James Bergstra · Rémi Bardenet · Yoshua Bengio · Balázs Kégl -
2011 Poster: On Tracking The Partition Function »
Guillaume Desjardins · Aaron Courville · Yoshua Bengio -
2010 Workshop: Deep Learning and Unsupervised Feature Learning »
Honglak Lee · Marc'Aurelio Ranzato · Yoshua Bengio · Geoffrey E Hinton · Yann LeCun · Andrew Y Ng -
2009 Poster: Slow, Decorrelated Features for Pretraining Complex Cell-like Networks »
James Bergstra · Yoshua Bengio -
2009 Poster: An Infinite Factor Model Hierarchy Via a Noisy-Or Mechanism »
Aaron Courville · Douglas Eck · Yoshua Bengio -
2009 Session: Debate on Future Publication Models for the NIPS Community »
Yoshua Bengio -
2007 Poster: Augmented Functional Time Series Representation and Forecasting with Gaussian Processes »
Nicolas Chapados · Yoshua Bengio -
2007 Poster: Learning the 2-D Topology of Images »
Nicolas Le Roux · Yoshua Bengio · Pascal Lamblin · Marc Joliveau · Balázs Kégl -
2007 Spotlight: Augmented Functional Time Series Representation and Forecasting with Gaussian Processes »
Nicolas Chapados · Yoshua Bengio -
2007 Poster: Topmoumoute Online Natural Gradient Algorithm »
Nicolas Le Roux · Pierre-Antoine Manzagol · Yoshua Bengio -
2006 Poster: Greedy Layer-Wise Training of Deep Networks »
Yoshua Bengio · Pascal Lamblin · Dan Popovici · Hugo Larochelle -
2006 Talk: Greedy Layer-Wise Training of Deep Networks »
Yoshua Bengio · Pascal Lamblin · Dan Popovici · Hugo Larochelle