Workshop on Bayesian optimization & related topics


When & where

The workshop will take place in person on June 20, 2024, at the Institut Henri Poincaré, Paris.

Room: amphithéâtre Hermite (ground floor).




Presentation of the workshop


Organizers: Céline Helbert, Delphine Sinoquet & Julien Bect.


Registration

Registration is free but appreciated.


Agenda


Speakers

Permanent researchers / ~ 35-minute talks

Rodolphe Le Riche (CNRS LIMOS, Mines de Saint-Etienne and UCA)
Bayesian optimization with derivatives acceleration

Daniel Hernandez Lobato (Universidad Autónoma de Madrid)
Parallel predictive entropy search for multi-objective Bayesian optimization with constraints

Clément Royer (LAMSADE, Université Paris Dauphine-PSL)
Random subspaces and expected decrease in derivative-free optimization

Mathieu Balesdent (ONERA)
Bayesian Quality-Diversity approaches for constrained optimization problems with mixed continuous, discrete and categorical variables


Young researchers / ~ 20-minute talks

Hadi Nasser (IMSE, Université Gustave Eiffel)
Multi-fidelity approaches for solving inverse problems relying on computer codes with functional outputs — Application to building envelope thermal performance

Sanae Janati Idrissi (CMAP, Ecole polytechnique)
Bayesian calibration of computer codes with adaptive surrogate modeling

Adán Reyes Reyes (IFP Energies nouvelles, Université Paris-Saclay)
Electrical Machines design via Bayesian optimization under uncertainties

Paul Saves (ONERA)
Gaussian process for Bayesian optimization with mixed hierarchical variables: Application to electric-hybrid aircraft eco-design

Noé Fellmann (ECL, IFP Energies Nouvelles)
Sensitivity Analysis in Constrained Bayesian Optimization with Uncertainties


Abstracts

Rodolphe Le Riche

Bayesian optimization with derivatives acceleration

Bayesian optimization algorithms form an important class of methods for minimizing functions that are costly to evaluate, which is a very common situation. These algorithms iteratively infer Gaussian processes from past observations of the function and decide where new observations should be made by maximizing an acquisition criterion. Often, in particular in engineering practice, the objective function is defined on a compact set such as a hyper-rectangle of a d-dimensional real space, and the bounds are chosen wide enough that the optimum lies inside the search domain. In this situation, this work provides a way to integrate into the acquisition criterion the a priori information that these functions, once modeled as GP trajectories, should be evaluated at their minima, and not at arbitrary points as usual acquisition criteria allow. We propose an adaptation of the widely used Expected Improvement acquisition criterion that accounts only for GP trajectories whose first-order partial derivatives are zero and whose Hessian matrix is positive definite. The new acquisition criterion retains an analytical, computationally efficient expression. It is found to improve Bayesian optimization on a test bed of functions made of Gaussian process trajectories in dimensions 2, 3 and 5. The addition of first- and second-order derivative information is particularly useful for multimodal functions.

(joint work with Guillaume Perrin)
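As context for the criterion being adapted in this talk, here is a minimal sketch of the standard Expected Improvement for minimization, computed pointwise from a GP posterior mean and standard deviation (the function name and interface are illustrative, not from the talk):

```python
import math

def expected_improvement(mu, sigma, y_best):
    """Standard EI for minimization, given the GP posterior mean `mu` and
    standard deviation `sigma` at a point, and the best observed value."""
    if sigma <= 0.0:
        return 0.0  # no posterior uncertainty: no expected improvement
    z = (y_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (y_best - mu) * cdf + sigma * pdf
```

This closed form is what makes EI cheap to maximize; the talk's variant keeps an analytical expression while restricting attention to trajectories with vanishing gradient and positive-definite Hessian.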


Daniel Hernandez Lobato

Parallel predictive entropy search for multi-objective Bayesian optimization with constraints

Real-world problems often involve the optimization of several objectives under multiple constraints. An example is the hyper-parameter tuning of machine learning algorithms, e.g., minimizing both an estimate of the generalization error of a deep neural network and its prediction time. We may also consider, as a constraint, that the deep neural network must be implemented in a chip with an area below some size. Here, both the objectives and the constraint are black boxes, i.e., functions whose analytical expressions are unknown and which are expensive to evaluate. Bayesian optimization (BO) methods have shown state-of-the-art results in these tasks. For this, they iteratively evaluate the objectives and the constraints at carefully chosen locations, with the goal of solving the optimization problem in a small number of iterations. Nevertheless, most BO methods are sequential and perform evaluations at just one input location at each iteration. Sometimes, however, we may evaluate several configurations in parallel. If this is the case, as when a cluster of computers is available, sequential evaluations result in a waste of resources. To avoid this, one has to choose, at each iteration, which locations to evaluate in parallel. This talk introduces PPESMOC, Parallel Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints, an information-based batch method for the simultaneous optimization of multiple expensive-to-evaluate black-box functions under the presence of several constraints. Iteratively, PPESMOC selects a batch of input locations at which to evaluate the black boxes in parallel so as to maximally reduce the entropy of the Pareto set of the optimization problem. To our knowledge, this is the first information-based batch method for constrained multi-objective BO. We present empirical evidence in the form of several optimization problems that illustrate the effectiveness of PPESMOC. Moreover, we also show in several experiments the utility of the proposed method to tune the hyper-parameters of machine learning algorithms.

(joint work with Eduardo C. Garrido-Merchán)


Clément Royer

Random subspaces and expected decrease in derivative-free optimization

Derivative-free algorithms seek the minimum of a given function based only on function values queried at appropriate points. Although these methods are widely used in practice, their performance is known to worsen as the problem dimension increases. Recent advances in developing randomized derivative-free techniques have tackled this issue by working in low-dimensional subspaces that are drawn at random in an iterative fashion. The connection between the dimension of these random subspaces and the algorithmic guarantees has yet to be fully understood. This talk will describe several strategies to select random subspaces within a derivative-free algorithm. We will explain how probabilistic convergence rates can be obtained for such a method, then provide numerical evidence that using low-dimensional random subspaces leads to the best practical performance. We investigate this behavior through a novel, expected decrease analysis that highlights a connection between subspace dimension and per-iteration decrease guarantees.

(joint work with Warren Hare and Lindon Roberts)
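To illustrate the general idea of working in randomly drawn low-dimensional subspaces (this is a toy sketch, not the authors' algorithm; all names and the step-size rule are illustrative), one can poll the objective along a few random directions per iteration and shrink the step when no decrease is found:

```python
import math
import random

def subspace_direct_search(f, x0, p=2, step=1.0, iters=300, seed=0):
    """Toy derivative-free search: at each iteration, draw p random Gaussian
    directions (spanning a random p-dimensional subspace of R^d), poll f at
    +/- step along each, and halve the step on an unsuccessful iteration."""
    rng = random.Random(seed)
    x, fx, d = list(x0), f(x0), len(x0)
    for _ in range(iters):
        dirs = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(p)]
        improved = False
        for v in dirs:
            norm = math.sqrt(sum(c * c for c in v)) or 1.0
            for sign in (+1.0, -1.0):
                cand = [xi + sign * step * c / norm for xi, c in zip(x, v)]
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # unsuccessful iteration: refine the step size
    return x, fx
```

The talk's analysis addresses precisely the trade-off visible here: smaller subspace dimension p means cheaper iterations, but a weaker per-iteration decrease guarantee.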


Mathieu Balesdent

Bayesian Quality-Diversity approaches for constrained optimization problems with mixed continuous, discrete and categorical variables

Complex engineering design problems, such as those involved in aerospace, civil, or energy engineering, require the use of numerically costly simulation codes in order to predict the behavior and performance of the system to be designed. To perform the design of the systems, these codes are often embedded into an optimization process to provide the best design while satisfying the design constraints. Recently, new approaches, called Quality-Diversity, have been proposed in order to enhance the exploration of the design space and to provide a set of optimal, diversified solutions with respect to some feature functions. These functions are useful for assessing trade-offs. Furthermore, complex engineering design problems often involve mixed continuous, discrete, and categorical design variables, which make it possible to take technological choices into account in the optimization problem. This talk will discuss Quality-Diversity methodologies based on a mixed continuous, discrete and categorical Bayesian optimization strategy. These approaches reduce the computational cost with respect to classical Quality-Diversity approaches while dealing with discrete choices and constraints. The performance of the methods will be discussed on a benchmark of analytical problems as well as on an industrial design optimization problem dealing with aerospace systems.

(joint work with Loïc Brevault)


Hadi Nasser

Multi-fidelity approaches for solving inverse problems relying on computer codes with functional outputs — Application to building envelope thermal performance

In the context of energy and environmental renovation, significant progress is expected and needed in the building sector. The thermal insulation of buildings is a key factor in ensuring the thermal comfort of occupants and achieving high energy savings. Therefore, there is a growing need to control the thermal resistance of walls (an indicator of the level of insulation) through in-situ measurements in existing and new buildings or during renovations. The work to be presented focuses on the identification of the thermal resistance of a wall with internal insulation. To achieve this, a Bayesian sequential multifidelity statistical approach will be developed to create a surrogate model that represents the best compromise between computational cost and expected uncertainty reduction, accurately identifying the thermal resistance of a given wall. To this end, an original problem formulation will be used to minimize the influence of uncontrolled variables (such as heat exchange coefficients between the wall and the exterior, the initial state of the wall, etc.), and a Bayesian formulation will be applied to incorporate model and measurement uncertainties into the thermal resistance estimation process. This protocol will be tested on a 4-layer wall with different thermal characteristics using temperature and flux simulations based on 0D, 1D and 2D axisymmetric physical wall models. The aim is not only to provide an estimate of the thermal resistance of the wall, but also to provide a confidence interval for the estimated parameters. This is done by quantifying the different sources of uncertainty.

(joint work with Guillaume Perrin, Julien Waeytens, Rachida Chakir & Séverine Demeyer)


Sanae Janati Idrissi

Bayesian calibration of computer codes with adaptive surrogate modeling

Computer codes are crucial for tackling complex engineering problems, especially in the nuclear field. However, these codes are based on complex models involving several context-specific parameters that may be unknown. To infer the best values of these parameters, we rely on a Bayesian calibration technique called the full maximum a posteriori (FMP) method, where a statistical model error is identified for every possible model parameter value. However, the FMP calibration technique can be computationally expensive due to the large number of model simulations that need to be run. In this work, we propose an adaptive approach for constructing a surrogate model of the computer code that minimizes the approximation error on regions of the parameter domain corresponding to high FMP posterior density.

This approach is tested on analytical functions. It is also applied to a complex two-phase flow model to calibrate the parameters of the coalescence and breakup mechanisms in the IATE equations in the bubbly flow regime. Results show that the proposed method improves computational efficiency by reducing the number of actual model evaluations required for surrogate construction compared to conventional space-filling designs.

(joint work with Pietro Marco Congedo, Olivier Le Maitre, Maria Giovanna Rodio)


Adán Reyes Reyes

Electrical Machines design via Bayesian optimization under uncertainties

With increasing climate change concerns, adopting measures to curb greenhouse gas emissions has become imperative. Electric and hybrid vehicles have emerged as promising alternatives to internal combustion engine vehicles in transportation. Central to electric vehicles' performance is the design of the electric machine, a crucial component of the powertrain. Contemporary design approaches employ multi-objective optimization methods coupled with sophisticated, costly-to-compute physical models to describe the machine's behavior accurately. In practical scenarios, deviations between the model input values and the effective values of geometric parameters and material properties are often observed. These deviations arise from manufacturing tolerances, impacting dimensions and material properties, and consequently affecting the predicted performance of the electric machine.

To overcome this challenge, this work focuses on an electric machine design problem formulated as an optimization problem with chance constraints and objective functions expressed as expectations with respect to the uncertainties. A multi-objective constrained Bayesian optimization algorithm taking the uncertainties into account and allowing parallel simulations is thus proposed.

This algorithm has been tested on several electrical machine design problems where the uncertainties come both from geometrical variables, which are also the controllable variables, and from the materials' physical properties. Different levels of probability for constraint satisfaction were tested in order to help the engineer validate the best compromises for their application and also to study the scope of application of the proposed algorithm.

(joint work with André Nasr, Sami Hlioui and Delphine Sinoquet)
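As background on the chance-constraint formulation mentioned above, a chance constraint requires P(g(x, ξ) ≤ 0) ≥ 1 − α over the uncertain inputs ξ. A toy Monte Carlo check of such a constraint might look as follows (all names are illustrative; this is not the algorithm of the talk, which avoids such brute-force sampling of the costly simulator):

```python
import random

def chance_constraint_satisfied(g, x, sample_xi, alpha=0.05, n=2000, seed=0):
    """Monte Carlo estimate of whether P(g(x, xi) <= 0) >= 1 - alpha,
    drawing n samples of the uncertain input xi."""
    rng = random.Random(seed)
    ok = sum(1 for _ in range(n) if g(x, sample_xi(rng)) <= 0.0)
    return ok / n >= 1.0 - alpha
```

Each such estimate costs n evaluations of g, which is precisely why surrogate-based approaches are attractive when g is an expensive simulation.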


Paul Saves

Gaussian process for Bayesian optimization with mixed hierarchical variables: Application to electric-hybrid aircraft eco-design

Multidisciplinary design engineering problems, such as those encountered in electric-hybrid aircraft design, often involve a complex interplay of aerodynamic, structural, and electric propulsion variables. This complexity results in high-dimensional and computationally expensive black-box problems where the design variables are not only continuous but also hierarchical, variable-size, and categorical.

Traditional modeling and optimization methods struggle to efficiently tackle such high-dimensional and mixed-variable hierarchical design spaces. Therefore, this work introduces a surrogate-based optimization framework able to handle both continuous and categorical design variables within a hierarchical structure. We propose to model and incorporate hierarchy information into the Gaussian process, enabling more accurate surrogate modeling of the underlying black-box objective function. This work showcases the effectiveness and scalability of our method. When applied to the “DRAGON” electric-hybrid aircraft concept, significant improvements in computational time and convergence rate are obtained, highlighting the potential of hierarchical GP for modeling variable-size black-box problems.

(joint work with Eric Nguyen Van, Nathalie Bartoli, Thierry Lefebvre, Youssef Diouane, Joseph Morlier)


Noé Fellmann

Sensitivity Analysis in Constrained Bayesian Optimization with Uncertainties

This study tackles the challenge of chance-constrained optimization under uncertainties, which entails significant computational burdens in practical applications. We utilize EFISUR, an existing algorithm for Bayesian optimization. It relies on modeling the problem with Gaussian process regression in the joint optimized-uncertain input space and on defining an acquisition criterion in that space that considers both the average improvement in the objective function and the reliability of the constraints. However, high dimensionality in either the design space or the uncertain parameter space can pose challenges due to the complexity of the optimization steps and of Gaussian process (GP) modeling.

To address this issue, we propose different strategies to integrate sensitivity analysis (SA) into EFISUR. Through comprehensive numerical test cases, we enhance the efficacy of EFISUR (even in small dimensions) and enable its applicability in higher dimensions.

(joint work with Christophette Blanchet-Scalliet, Céline Helbert, Delphine Sinoquet and Adrien Spagnol)