Bayesian optimization (BO) is a recent subfield of machine learning comprising a collection of methodologies for the efficient optimization of expensive black-box functions. BO techniques work by fitting a model to black-box function data and then using the model's predictions to decide where to collect data next, so that the optimization problem can be solved with only a small number of function evaluations. The resulting methods are characterized by their high sample efficiency compared to alternative black-box optimization algorithms, enabling the solution of new, challenging problems. For example, in recent years BO has become a popular tool in the machine learning community thanks to its excellent performance on hyperparameter tuning, with important results in both academia and industry. This success has made BO a crucial player in the current trend towards "automatic machine learning".
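To make this model-fit / acquisition / evaluate loop concrete, below is a minimal sketch of BO on a one-dimensional toy problem, using a Gaussian process surrogate from scikit-learn and the expected-improvement acquisition function. The toy objective, Matern kernel, and grid-based acquisition maximization are illustrative assumptions, not a reference implementation of any particular method discussed at the workshop.

```python
# Minimal Bayesian-optimization loop: fit a surrogate, maximize an
# acquisition function, evaluate the black box, repeat.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Stand-in for an expensive black-box function (assumption).
    return np.sin(3 * x) + 0.5 * x

def expected_improvement(X, gp, y_best, xi=0.01):
    # EI for minimization: trades off low predicted mean (exploitation)
    # against high predictive uncertainty (exploration).
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu - xi) / sigma
    return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))        # small initial design
y = objective(X).ravel()

grid = np.linspace(-2, 2, 501).reshape(-1, 1)
for _ in range(15):                        # budget: 15 evaluations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(X, y)
    ei = expected_improvement(grid, gp, y.min())
    x_next = grid[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmin(y)].item(), "best f(x):", y.min())
```

In practice the acquisition function is maximized with a continuous optimizer rather than a grid, but the structure of the loop is the same.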
As new BO methods have been developed, their area of applicability has been continuously expanding. While the problem of hyperparameter tuning permeates all disciplines, the field has moved towards more specific problems in science and engineering that require new, advanced methodology. Today, Bayesian optimization is among the most promising approaches for accelerating and automating science and engineering. We have therefore chosen this year's workshop theme to be "Bayesian optimization for science and engineering".
We enumerate below a few of the recent directions in which BO methodology is being pushed forward to address specific problems in science and engineering:
- Beyond Gaussian processes. While GPs are the default choice in BO, specific problems in science and engineering require collecting larger amounts of complex data. In these settings, alternative models, such as Bayesian neural networks or deep Gaussian processes, are needed to capture complex patterns with higher accuracy. How can we do efficient BO with these types of models?
- Optimization in structured domains. Many problems in science and engineering require optimization over complex spaces that differ from the typical box-constrained subset of real coordinate space. For example, how can we efficiently optimize over graphs, discrete sequences, trees, computer programs, etc.?
- Safe optimization. Critical applications in science and engineering may include configurations in the search space that are unsafe or harmful and can lead to system failure. How can we perform efficient BO while avoiding these unsafe settings?
- Incorporating domain-specific knowledge. In many optimization settings, a wealth of domain-specific information is available and can be used to build bespoke BO methods with significantly better performance than their off-the-shelf counterparts. How can we encode and transfer available knowledge into BO methods quickly and easily?
- Optimization with structured output response. In specific cases, each black-box evaluation may produce additional output values besides an estimate of the objective function. For example, when tuning a simulator, besides the final output value we may also obtain structured data related to the simulator's execution trace. How can we design BO methods that automatically exploit this extra structured output?
- Scalability and fast evaluations. Recent problems require collecting massive amounts of data in parallel, often at a cost per evaluation that is lower than in typical BO problems. How can we design efficient BO methods that collect very large batches of data in parallel (one simple heuristic is sketched after this list)? How can we automatically adjust the cost of BO methods so that they remain efficient even when data collection is not highly expensive?
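One simple answer to the batch question above is greedy selection with the "constant liar" heuristic: pick a point, pretend it has already been observed at a fixed value (here the current best), refit the model, and repeat. Below is a minimal Python sketch under that assumption; the helper names (gp_factory, ei_fn) are hypothetical, and ei_fn can be the expected_improvement function from the earlier sketch.

```python
import numpy as np

def select_batch(gp_factory, X, y, grid, batch_size, ei_fn):
    # Greedily pick batch_size points. After each pick, we append a
    # "lie" (the current best observed value) so that subsequent picks
    # spread out instead of clustering on the same acquisition peak.
    X_aug, y_aug = X.copy(), y.copy()
    batch = []
    for _ in range(batch_size):
        gp = gp_factory().fit(X_aug, y_aug)
        ei = ei_fn(grid, gp, y_aug.min())
        x_new = grid[np.argmax(ei)].reshape(1, -1)
        batch.append(x_new)
        X_aug = np.vstack([X_aug, x_new])
        y_aug = np.append(y_aug, y_aug.min())   # the constant lie
    return np.vstack(batch)
```

Here gp_factory is assumed to return a fresh, unfitted surrogate, e.g. lambda: GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True); the returned batch can then be evaluated in parallel before the model is refit on the true observations.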
The target audience for this workshop consists of industrial and academic practitioners of Bayesian optimization, as well as researchers working on theoretical and practical advances in model-based optimization across different engineering areas. We expect this pairing of theoretical and applied knowledge to lead to an interesting exchange of ideas and to stimulate an open discussion about the long-term goals and challenges of the Bayesian optimization community.
The main goal of the workshop is to serve as a forum for discussion and to encourage collaboration between the diverse set of scientists who develop and use Bayesian optimization and related techniques. Researchers and practitioners from academia are welcome, as are people from the wider optimization, engineering, and probabilistic modeling communities.
Schedule
Sat 9:00 a.m. - 9:10 a.m. | Introduction (Talk)
Sat 9:10 a.m. - 9:40 a.m. | Invited talk: Towards Safe Bayesian Optimization (Talk) | Andreas Krause
Sat 9:40 a.m. - 10:10 a.m. | Invited talk: Learning to Learn without Gradient Descent by Gradient Descent (Talk) | Yutian Chen
Sat 10:10 a.m. - 10:30 a.m. | Poster spotlights 1 (Talk)
Sat 11:00 a.m. - 11:30 a.m. | Invited talk: Scaling Bayesian Optimization in High Dimensions (Talk) | Stefanie Jegelka
Sat 11:30 a.m. - 11:45 a.m. | Poster spotlights 2 (Talk)
Sat 2:00 p.m. - 2:30 p.m. | Invited talk: Neuroadaptive Bayesian Optimization - Implications for Cognitive Sciences (Talk) | Romy Lorenz
Sat 2:30 p.m. - 3:00 p.m. | Poster session (Posters)
Sat 4:00 p.m. - 4:30 p.m. | Invited talk: Knowledge Gradient Methods for Bayesian Optimization (Talk) | Peter Frazier
Sat 4:30 p.m. - 5:00 p.m. | Invited talk: Quantifying and Reducing Uncertainties on Sets under Gaussian Process Priors (Talk) | David Ginsbourger
Sat 4:55 p.m. - 6:00 p.m. | Panel (Discussion)
Author Information
Ruben Martinez-Cantin (Centro Universitario de la Defensa)
José Miguel Hernández-Lobato (University of Cambridge)
Javier Gonzalez (Amazon)