NIPS 2016


Bayesian Optimization: Black-box Optimization and Beyond

Roberto Calandra · Bobak Shahriari · Javier Gonzalez · Frank Hutter · Ryan Adams

Room 117

Bayesian optimization has emerged as an exciting subfield of machine learning that is concerned with the global optimization of expensive, noisy, black-box functions using probabilistic methods. Systems implementing Bayesian optimization techniques have been successfully used to solve difficult problems in a diverse set of applications. Many recent advances in the methodologies and theory underlying Bayesian optimization have extended the framework to new applications and provided greater insights into the behaviour of these algorithms. Bayesian optimization is now increasingly being used in industrial settings, providing new and interesting challenges that require new algorithms and theoretical insights.
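The loop this describes can be sketched with a Gaussian-process surrogate and the expected-improvement acquisition. This is a minimal illustration, not code from the workshop: the RBF kernel, noise level, candidate grid, and the toy noisy objective are all assumptions made for the example.

```python
import numpy as np
from math import erf

def rbf(a, b, lengthscale=0.2):
    """Squared-exponential kernel between two 1-D input arrays."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

def gp_posterior(X, y, Xcand, noise=1e-2):
    """Posterior mean and std of a zero-mean GP at candidate points."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xcand)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI for minimization: E[max(best - f, 0)] under the GP posterior."""
    z = (best - mu) / sigma
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    return (best - mu) * cdf + sigma * pdf

rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x) + 0.1 * rng.normal(size=np.shape(x))  # noisy black box (toy)

Xcand = np.linspace(-1.0, 2.0, 200)   # candidate grid over the search domain
X = np.array([-0.5, 0.5, 1.5])        # initial design
y = f(X)
for _ in range(10):                   # sequential BO loop: fit, score, evaluate
    mu, sigma = gp_posterior(X, y, Xcand)
    x_next = Xcand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

best_x = X[np.argmin(y)]              # incumbent after 10 evaluations
```

Each iteration spends compute on the model instead of on extra function evaluations, which is exactly the trade that makes BO attractive when evaluations are expensive.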
Classically, Bayesian optimization has been used purely for expensive single-objective black-box optimization. However, as tasks and applications grow more complex, this paradigm is proving too restrictive. Hence, this year’s theme for the workshop will be “black-box optimization and beyond”. Among the recent trends that push beyond classical BO we can briefly enumerate:
- Adapting BO to not-so-expensive evaluations, where the overhead of fitting and querying the probabilistic model itself becomes a relevant cost.
- “Opening the black box”: moving away from viewing the model as a way of simply fitting a response surface, and towards modelling for the purpose of discovering and understanding the underlying process. For instance, this so-called grey-box modelling approach could be valuable in robotic applications for optimizing the controller while simultaneously providing insight into the mechanical properties of the robotic system.
- “Meta-learning”, where a higher level of learning is used on top of BO in order to control the optimization process and make it more efficient. Examples of such meta-learning include learning curve prediction, Freeze-thaw Bayesian optimization, online batch selection, multi-task and multi-fidelity learning.
- Multi-objective optimization, where not a single objective but multiple conflicting objectives are considered simultaneously (e.g., prediction accuracy vs. training time).
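In the multi-objective setting there is generally no single best point; methods instead approximate the Pareto front of non-dominated trade-offs. As a tiny illustration (the (validation-error, training-time) values below are made up for the example), a dominance filter over observed configurations can be written as:

```python
import numpy as np

def pareto_front(points):
    """Return the rows of `points` (all objectives to be minimized) that no
    other row dominates, i.e. no other row is at least as good in every
    objective and strictly better in at least one."""
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        for j in range(len(points)):
            if (i != j and np.all(points[j] <= points[i])
                    and np.any(points[j] < points[i])):
                keep[i] = False
                break
    return points[keep]

# hypothetical (validation error, training time in seconds) pairs
pts = np.array([[0.10, 50.0], [0.08, 120.0], [0.12, 30.0],
                [0.09, 200.0], [0.07, 300.0]])
front = pareto_front(pts)  # [0.09, 200.0] is dominated by [0.08, 120.0]
```

Multi-objective BO methods differ mainly in how an acquisition function steers evaluations towards improving this front rather than a single incumbent.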
The target audience for this workshop consists of both industrial and academic practitioners of Bayesian optimization, as well as researchers working on theoretical and practical advances in probabilistic optimization. We expect that this pairing of theoretical and applied knowledge will lead to an interesting exchange of ideas and stimulate an open discussion about the long-term goals and challenges of the Bayesian optimization community.
A further goal is to encourage collaboration between the diverse set of researchers involved in Bayesian optimization. This includes not only interchange between industrial and academic researchers, but also between the many different subfields of machine learning which make use of Bayesian optimization or its components. We are also reaching out to the wider optimization and engineering communities for involvement.
