Workshop on Bayesian optimization & related topics


When & where

The workshop will take place in person on June 20, 2024, at the Institut Henri Poincaré, Paris.

Room: amphithéâtre Hermite (ground floor).




Presentation of the workshop

Organizers: Céline Helbert, Delphine Sinoquet & Julien Bect.


Registration

Attendance is free, but registration is appreciated.


Agenda


Speakers

Rodolphe Le Riche (CNRS LIMOS, Mines de Saint-Etienne and UCA)
Bayesian Optimization in the presence of uncertainties: an overview in 2024

Daniel Hernandez Lobato (Universidad Autónoma de Madrid)
Parallel predictive entropy search for multi-objective Bayesian optimization with constraints

Clément Royer (LAMSADE, Université Paris Dauphine-PSL)
Random subspaces and expected decrease in derivative-free optimization

Mathieu Balesdent (ONERA)
Bayesian Quality-Diversity approaches for constrained optimization problems with mixed continuous, discrete and categorical variables


Abstracts

Rodolphe Le Riche

Bayesian Optimization in the presence of uncertainties: an overview in 2024

Most functions of interest in optimization model a certain reality and contain uncertainties that come either from a lack of knowledge or from inherently random phenomena. Optimizing such functions requires special precautions. The importance of this class of problems has given rise to scientific communities that identify themselves under terms such as "robust optimization", "reliability-based design optimization", "noisy optimization" and "optimization under uncertainties". In the first part of this talk, we review many of these works in a common framework. We start with the problem formulation, that is, the statistical measures that are optimized. Gaussian processes (GPs) are often involved in the estimation of these statistical measures, and Bayesian optimization under uncertainty consists in optimizing such GP-based estimators; this constitutes the focus of the review. In the second part of the talk, we summarize a specific series of works on Bayesian optimization in the presence of uncertainties [1,2]. The distinctive feature of these methods is that the parameters carrying the uncertainty can be optimally chosen together with the optimization variables.

[1] Reda El Amri, Rodolphe Le Riche, Céline Helbert, Christophette Blanchet-Scalliet and Sébastien Da Veiga, A sampling criterion for constrained Bayesian optimization with uncertainties, SMAI Journal of Computational Mathematics, vol. 9, pp. 285-309, doi: 10.5802/smai-jcm.102, 2023.

[2] Julien Pelamatti, Rodolphe Le Riche, Céline Helbert and Christophette Blanchet-Scalliet, Coupling and selecting constraints in Bayesian optimization under uncertainties, Optimization and Engineering, doi: 10.1007/s11081-023-09807-x, April 2023.

(joint work with Julien Pelamatti, Reda El Amri, Céline Helbert and Christophette Blanchet-Scalliet)
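
To make the setting concrete, here is a minimal, self-contained Python sketch of Bayesian optimization under uncertainty with a GP fit on the joint (x, u) space. The toy function f, the Monte Carlo averaging over u, and the acquisition rule are illustrative assumptions, not the sampling criteria of [1,2]:

    # Illustrative sketch only: BO under uncertainty on a toy problem, where f
    # depends on a design variable x and an uncertain input u, and we optimize
    # a GP-based estimate of the mean objective E_U[f(x, U)].
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)

    def f(x, u):  # hypothetical toy objective: x is the design, u the uncertainty
        return (x - 0.5) ** 2 + 0.3 * np.sin(5 * x) * u + 0.1 * u ** 2

    # Initial design over the *joint* (x, u) space, both in [0, 1].
    X = rng.uniform(size=(20, 2))
    y = f(X[:, 0], X[:, 1])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    x_cand = np.linspace(0.0, 1.0, 101)
    for _ in range(15):
        gp.fit(X, y)
        # GP-based estimate of the statistical measure: average the posterior
        # mean over Monte Carlo samples of U, for each candidate x.
        u_mc = rng.uniform(size=64)
        grid = np.array([[x, u] for x in x_cand for u in u_mc])
        mu, sd = gp.predict(grid, return_std=True)
        mu = mu.reshape(len(x_cand), len(u_mc))
        sd = sd.reshape(len(x_cand), len(u_mc))
        mean_est = mu.mean(axis=1)  # estimated E_U[f(x, U)]
        # Next x: simple lower-confidence bound on the estimated mean. Next u:
        # where the GP is most uncertain at that x. (The "optimally chosen"
        # uncertain parameters of [1,2] rely on far more refined criteria.)
        i = int(np.argmin(mean_est - sd.mean(axis=1)))
        x_next, u_next = x_cand[i], u_mc[int(np.argmax(sd[i]))]
        X = np.vstack([X, [x_next, u_next]])
        y = np.append(y, f(x_next, u_next))

    print(f"estimated robust optimum: x = {x_cand[int(np.argmin(mean_est))]:.3f}")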


Daniel Hernandez Lobato

Parallel predictive entropy search for multi-objective Bayesian optimization with constraints

Real-world problems often involve the optimization of several objectives under multiple constraints. An example is the hyper-parameter tuning of machine learning algorithms: minimizing both an estimate of the generalization error of a deep neural network and its prediction time, while requiring, as a constraint, that the network fit on a chip whose area is below some size. Here, both the objectives and the constraint are black boxes, i.e., functions whose analytical expressions are unknown and that are expensive to evaluate. Bayesian optimization (BO) methods have shown state-of-the-art results in these tasks. To achieve this, they iteratively evaluate the objectives and the constraints at carefully chosen locations, with the goal of solving the optimization problem in a small number of iterations. Nevertheless, most BO methods are sequential and perform evaluations at just one input location at each iteration. Sometimes, however, we may evaluate several configurations in parallel. If this is the case, as when a cluster of computers is available, sequential evaluations result in a waste of resources. To avoid this, one has to choose, at each iteration, which locations to evaluate in parallel. This talk introduces PPESMOC, Parallel Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints, an information-based batch method for the simultaneous optimization of multiple expensive-to-evaluate black-box functions in the presence of several constraints. At each iteration, PPESMOC selects a batch of input locations at which to evaluate the black boxes in parallel so as to maximally reduce the entropy of the Pareto set of the optimization problem. To our knowledge, this is the first information-based batch method for constrained multi-objective BO. We present empirical evidence, in the form of several optimization problems, that illustrates the effectiveness of PPESMOC. Moreover, several experiments show the utility of the proposed method for tuning the hyper-parameters of machine learning algorithms.

(joint work with Eduardo C. Garrido-Merchán)
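
As a rough illustration of the batch setting only (not of the predictive entropy search criterion itself), the following Python sketch selects a batch by a simple greedy maximum-variance rule with fantasized observations and evaluates it in parallel. The objective, the acquisition, and all parameter choices are illustrative assumptions:

    # Schematic batch BO loop, much simpler than PPESMOC: each iteration
    # selects a whole batch of q points and evaluates the black box in
    # parallel. The acquisition is a greedy maximum-variance stand-in, NOT the
    # entropy reduction over the Pareto set used by PPESMOC.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor
    from sklearn.gaussian_process import GaussianProcessRegressor

    def objective(x):  # toy stand-in for an expensive black box
        return float(np.sin(3.0 * x[0]) + x[0] ** 2)

    def select_batch(X, y, candidates, q):
        """Greedy batch selection: pick the candidate with the largest
        posterior std, fantasize its value at the posterior mean, refit,
        repeat, so later picks are pushed away from earlier batch members."""
        batch, X_f, y_f = [], X.copy(), y.copy()
        for _ in range(q):
            gp = GaussianProcessRegressor(normalize_y=True).fit(X_f, y_f)
            _, sd = gp.predict(candidates, return_std=True)
            i = int(np.argmax(sd))
            batch.append(candidates[i])
            X_f = np.vstack([X_f, candidates[i:i + 1]])
            y_f = np.append(y_f, gp.predict(candidates[i:i + 1]))
        return np.array(batch)

    rng = np.random.default_rng(1)
    X = rng.uniform(-1.0, 1.0, size=(5, 1))
    y = np.array([objective(x) for x in X])

    for _ in range(5):
        cand = rng.uniform(-1.0, 1.0, size=(256, 1))
        batch = select_batch(X, y, cand, q=4)
        with ThreadPoolExecutor() as pool:  # evaluate the batch in parallel
            new_y = list(pool.map(objective, batch))
        X, y = np.vstack([X, batch]), np.append(y, new_y)

    print("best value found:", y.min())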


Clément Royer

Random subspaces and expected decrease in derivative-free optimization

Derivative-free algorithms seek the minimum of a given function based only on function values queried at appropriately chosen points. Although these methods are widely used in practice, their performance is known to worsen as the problem dimension increases. Recent advances in randomized derivative-free techniques have tackled this issue by working in low-dimensional subspaces that are drawn at random in an iterative fashion. The connection between the dimension of these random subspaces and the algorithmic guarantees has yet to be fully understood. This talk will describe several strategies for selecting random subspaces within a derivative-free algorithm. We will explain how probabilistic convergence rates can be obtained for such a method, and then provide numerical evidence that using low-dimensional random subspaces leads to the best practical performance. We investigate this behavior through a novel expected-decrease analysis that highlights a connection between the subspace dimension and per-iteration decrease guarantees.

(joint work with Warren Hare and Lindon Roberts)
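
The following minimal Python sketch conveys the general idea of polling along directions drawn in a random low-dimensional subspace; the Gaussian subspace basis, step-size rule, and forcing function below are simplifying assumptions, not the algorithm analyzed in the talk:

    # Illustrative random-subspace direct search: at each iteration we poll
    # along +/- the columns of a random n x d Gaussian matrix, so all trial
    # steps live in a random d-dimensional subspace of R^n.
    import numpy as np

    def rs_direct_search(f, x0, d=2, alpha=1.0, max_iter=200, seed=0):
        rng = np.random.default_rng(seed)
        x, fx = np.asarray(x0, float), f(x0)
        for _ in range(max_iter):
            P = rng.standard_normal((x.size, d)) / np.sqrt(d)  # random basis
            improved = False
            for s in np.hstack([P, -P]).T:                     # poll +/- directions
                trial = x + alpha * s
                ft = f(trial)
                if ft < fx - 1e-4 * alpha ** 2:                # sufficient decrease
                    x, fx, improved = trial, ft, True
                    break
            alpha = 2.0 * alpha if improved else 0.5 * alpha   # step-size update
            if alpha < 1e-8:
                break
        return x, fx

    # Example: d = 2 subspaces in a 50-dimensional quadratic.
    f = lambda z: float(np.sum((np.asarray(z) - 1.0) ** 2))
    x_opt, f_opt = rs_direct_search(f, np.zeros(50), d=2)
    print(f"f(x) = {f_opt:.3e} after random-subspace direct search")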


Mathieu Balesdent

Bayesian Quality-Diversity approaches for constrained optimization problems with mixed continuous, discrete and categorical variables

Complex engineering design problems, such as those arising in aerospace, civil, or energy engineering, rely on numerically costly simulation codes to predict the behavior and performance of the system to be designed. These codes are often embedded into an optimization process that seeks the best design while satisfying the design constraints. Recently, new approaches, called Quality-Diversity, have been proposed to enhance the exploration of the design space and to provide a set of optimal, diversified solutions with respect to some feature functions; these functions are useful for assessing trade-offs. Furthermore, complex engineering design problems often involve mixed continuous, discrete, and categorical design variables, which make it possible to take technological choices into account in the optimization problem. This talk will discuss Quality-Diversity methodologies based on a Bayesian optimization strategy for mixed continuous, discrete, and categorical variables. These approaches reduce the computational cost with respect to classical Quality-Diversity approaches while handling discrete choices and constraints. The performance of the methods will be discussed on a benchmark of analytical problems as well as on an industrial design optimization problem dealing with aerospace systems.

(joint work with Loïc Brevault)
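
For intuition, here is a toy Python sketch of a surrogate-assisted Quality-Diversity loop on a mixed continuous/categorical design: a MAP-Elites-style archive keeps the best design per feature cell, and a GP pre-screens random candidates so that only one expensive evaluation is made per iteration. All names and modeling choices (the one-hot encoding, the UCB screening, the toy evaluate function) are illustrative assumptions, not the methodology of the talk:

    # Toy Bayesian Quality-Diversity sketch on a mixed design space.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    CATS = ["matA", "matB", "matC"]                 # hypothetical categorical choice

    def encode(x_cont, cat):                        # mixed design -> numeric vector
        onehot = [1.0 if c == cat else 0.0 for c in CATS]
        return np.array([x_cont] + onehot)

    def evaluate(x_cont, cat):                      # toy expensive black box
        shift = {"matA": 0.0, "matB": 0.3, "matC": -0.2}[cat]
        fitness = -(x_cont - 0.5 - shift) ** 2      # objective to maximize
        feature = x_cont + 0.5 * shift              # feature used for diversity
        return fitness, feature

    rng = np.random.default_rng(2)
    archive = {}                                    # feature cell -> (fitness, design)
    X_enc, y = [], []

    for _ in range(60):
        # Sample a small batch of random mixed candidates.
        cands = [(rng.uniform(), CATS[rng.integers(len(CATS))]) for _ in range(32)]
        if len(y) >= 5:                             # GP pre-screening step
            gp = GaussianProcessRegressor(normalize_y=True)
            gp.fit(np.array(X_enc), np.array(y))
            mu, sd = gp.predict(np.array([encode(*c) for c in cands]),
                                return_std=True)
            best = int(np.argmax(mu + sd))          # optimistic (UCB) screening
        else:
            best = 0
        x_cont, cat = cands[best]
        fit, feat = evaluate(x_cont, cat)           # single expensive call
        X_enc.append(encode(x_cont, cat)); y.append(fit)
        cell = int(np.clip(feat, 0, 1) * 10)        # discretized feature cell
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, (round(x_cont, 3), cat))

    print(f"{len(archive)} cells filled; elites:", archive)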